00:00:00.001 Started by upstream project "autotest-nightly" build number 4315 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3678 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.212 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.212 The recommended git tool is: git 00:00:00.213 using credential 00000000-0000-0000-0000-000000000002 00:00:00.214 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.246 Fetching changes from the remote Git repository 00:00:00.248 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.282 Using shallow fetch with depth 1 00:00:00.282 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.282 > git --version # timeout=10 00:00:00.309 > git --version # 'git version 2.39.2' 00:00:00.309 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.332 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.332 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:08.243 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:08.256 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:08.268 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:08.268 > git config core.sparsecheckout # timeout=10 00:00:08.278 > git read-tree -mu HEAD # timeout=10 00:00:08.293 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:08.315 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:08.315 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:08.400 [Pipeline] Start of Pipeline 00:00:08.414 [Pipeline] library 00:00:08.416 Loading library shm_lib@master 00:00:08.416 Library shm_lib@master is cached. Copying from home. 00:00:08.435 [Pipeline] node 00:00:08.447 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:08.448 [Pipeline] { 00:00:08.456 [Pipeline] catchError 00:00:08.457 [Pipeline] { 00:00:08.466 [Pipeline] wrap 00:00:08.473 [Pipeline] { 00:00:08.478 [Pipeline] stage 00:00:08.479 [Pipeline] { (Prologue) 00:00:08.497 [Pipeline] echo 00:00:08.498 Node: VM-host-SM38 00:00:08.505 [Pipeline] cleanWs 00:00:08.516 [WS-CLEANUP] Deleting project workspace... 00:00:08.516 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.524 [WS-CLEANUP] done 00:00:08.701 [Pipeline] setCustomBuildProperty 00:00:08.786 [Pipeline] httpRequest 00:00:09.377 [Pipeline] echo 00:00:09.379 Sorcerer 10.211.164.20 is alive 00:00:09.388 [Pipeline] retry 00:00:09.390 [Pipeline] { 00:00:09.403 [Pipeline] httpRequest 00:00:09.407 HttpMethod: GET 00:00:09.408 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.408 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.410 Response Code: HTTP/1.1 200 OK 00:00:09.411 Success: Status code 200 is in the accepted range: 200,404 00:00:09.411 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:12.284 [Pipeline] } 00:00:12.302 [Pipeline] // retry 00:00:12.310 [Pipeline] sh 00:00:12.594 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:12.614 [Pipeline] httpRequest 00:00:13.056 [Pipeline] echo 00:00:13.058 Sorcerer 10.211.164.20 is alive 00:00:13.067 [Pipeline] retry 00:00:13.069 [Pipeline] { 00:00:13.085 [Pipeline] httpRequest 00:00:13.091 HttpMethod: GET 00:00:13.091 URL: http://10.211.164.20/packages/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz 00:00:13.092 Sending request to url: http://10.211.164.20/packages/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz 00:00:13.101 Response Code: HTTP/1.1 200 OK 00:00:13.102 Success: Status code 200 is in the accepted range: 200,404 00:00:13.102 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz 00:01:10.746 [Pipeline] } 00:01:10.764 [Pipeline] // retry 00:01:10.771 [Pipeline] sh 00:01:11.057 + tar --no-same-owner -xf spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz 00:01:14.373 [Pipeline] sh 00:01:14.660 + git -C spdk log --oneline -n5 00:01:14.660 35cd3e84d bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask 00:01:14.660 01a2c4855 bdev/passthru: Pass through dif_check_flags via dif_check_flags_exclude_mask 00:01:14.660 9094b9600 bdev: Assert to check if I/O pass dif_check_flags not enabled by bdev 00:01:14.660 2e10c84c8 nvmf: Expose DIF type of namespace to host again 00:01:14.660 38b931b23 nvmf: Set bdev_ext_io_opts::dif_check_flags_exclude_mask for read/write 00:01:14.682 [Pipeline] writeFile 00:01:14.697 [Pipeline] sh 00:01:14.986 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:14.999 [Pipeline] sh 00:01:15.280 + cat autorun-spdk.conf 00:01:15.281 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:15.281 SPDK_TEST_NVME=1 00:01:15.281 SPDK_TEST_FTL=1 00:01:15.281 SPDK_TEST_ISAL=1 00:01:15.281 SPDK_RUN_ASAN=1 00:01:15.281 SPDK_RUN_UBSAN=1 00:01:15.281 SPDK_TEST_XNVME=1 00:01:15.281 SPDK_TEST_NVME_FDP=1 00:01:15.281 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:15.288 RUN_NIGHTLY=1 00:01:15.289 [Pipeline] } 00:01:15.298 [Pipeline] // stage 00:01:15.307 [Pipeline] stage 00:01:15.309 [Pipeline] { (Run VM) 00:01:15.316 [Pipeline] sh 00:01:15.595 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:15.595 + echo 'Start stage prepare_nvme.sh' 00:01:15.595 Start stage prepare_nvme.sh 00:01:15.595 + [[ -n 1 ]] 00:01:15.595 + disk_prefix=ex1 00:01:15.595 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:01:15.595 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:15.595 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:15.595 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:15.595 ++ SPDK_TEST_NVME=1 
00:01:15.595 ++ SPDK_TEST_FTL=1 00:01:15.595 ++ SPDK_TEST_ISAL=1 00:01:15.595 ++ SPDK_RUN_ASAN=1 00:01:15.595 ++ SPDK_RUN_UBSAN=1 00:01:15.595 ++ SPDK_TEST_XNVME=1 00:01:15.595 ++ SPDK_TEST_NVME_FDP=1 00:01:15.595 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:15.595 ++ RUN_NIGHTLY=1 00:01:15.595 + cd /var/jenkins/workspace/nvme-vg-autotest 00:01:15.595 + nvme_files=() 00:01:15.595 + declare -A nvme_files 00:01:15.595 + backend_dir=/var/lib/libvirt/images/backends 00:01:15.595 + nvme_files['nvme.img']=5G 00:01:15.595 + nvme_files['nvme-cmb.img']=5G 00:01:15.595 + nvme_files['nvme-multi0.img']=4G 00:01:15.595 + nvme_files['nvme-multi1.img']=4G 00:01:15.595 + nvme_files['nvme-multi2.img']=4G 00:01:15.595 + nvme_files['nvme-openstack.img']=8G 00:01:15.595 + nvme_files['nvme-zns.img']=5G 00:01:15.595 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:15.595 + (( SPDK_TEST_FTL == 1 )) 00:01:15.595 + nvme_files["nvme-ftl.img"]=6G 00:01:15.595 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:15.595 + nvme_files["nvme-fdp.img"]=1G 00:01:15.595 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:15.595 + for nvme in "${!nvme_files[@]}" 00:01:15.595 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:01:15.595 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:15.595 + for nvme in "${!nvme_files[@]}" 00:01:15.595 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-ftl.img -s 6G 00:01:15.595 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:15.595 + for nvme in "${!nvme_files[@]}" 00:01:15.595 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:01:15.595 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:15.595 + for nvme in "${!nvme_files[@]}" 00:01:15.595 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:01:15.595 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:15.595 + for nvme in "${!nvme_files[@]}" 00:01:15.595 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:01:16.540 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:16.540 + for nvme in "${!nvme_files[@]}" 00:01:16.540 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:01:16.540 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:16.540 + for nvme in "${!nvme_files[@]}" 00:01:16.540 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:01:16.540 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:16.540 + for nvme in "${!nvme_files[@]}" 00:01:16.540 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-fdp.img -s 1G 00:01:16.540 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:16.540 + for nvme in "${!nvme_files[@]}" 00:01:16.540 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n 
/var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:01:17.112 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:17.112 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:01:17.112 + echo 'End stage prepare_nvme.sh' 00:01:17.112 End stage prepare_nvme.sh 00:01:17.124 [Pipeline] sh 00:01:17.406 + DISTRO=fedora39 00:01:17.406 + CPUS=10 00:01:17.406 + RAM=12288 00:01:17.406 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:17.406 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex1-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:01:17.406 00:01:17.406 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:01:17.406 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:01:17.406 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:01:17.406 HELP=0 00:01:17.406 DRY_RUN=0 00:01:17.406 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,/var/lib/libvirt/images/backends/ex1-nvme-fdp.img, 00:01:17.406 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:17.406 NVME_AUTO_CREATE=0 00:01:17.406 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,, 00:01:17.406 NVME_CMB=,,,, 00:01:17.406 NVME_PMR=,,,, 00:01:17.406 NVME_ZNS=,,,, 00:01:17.406 NVME_MS=true,,,, 00:01:17.406 NVME_FDP=,,,on, 00:01:17.406 SPDK_VAGRANT_DISTRO=fedora39 00:01:17.406 SPDK_VAGRANT_VMCPU=10 00:01:17.406 SPDK_VAGRANT_VMRAM=12288 00:01:17.406 SPDK_VAGRANT_PROVIDER=libvirt 00:01:17.406 SPDK_VAGRANT_HTTP_PROXY= 00:01:17.406 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:17.406 SPDK_OPENSTACK_NETWORK=0 00:01:17.406 VAGRANT_PACKAGE_BOX=0 00:01:17.406 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:17.406 FORCE_DISTRO=true 00:01:17.406 VAGRANT_BOX_VERSION= 00:01:17.406 EXTRA_VAGRANTFILES= 00:01:17.406 NIC_MODEL=e1000 00:01:17.406 00:01:17.406 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:01:17.406 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:01:19.951 Bringing machine 'default' up with 'libvirt' provider... 00:01:20.212 ==> default: Creating image (snapshot of base box volume). 00:01:20.473 ==> default: Creating domain with the following settings... 
00:01:20.473 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732865590_ca0b0b7aaa4bcf5b9e73
00:01:20.473 ==> default: -- Domain type: kvm
00:01:20.473 ==> default: -- Cpus: 10
00:01:20.473 ==> default: -- Feature: acpi
00:01:20.473 ==> default: -- Feature: apic
00:01:20.473 ==> default: -- Feature: pae
00:01:20.473 ==> default: -- Memory: 12288M
00:01:20.473 ==> default: -- Memory Backing: hugepages:
00:01:20.473 ==> default: -- Management MAC:
00:01:20.473 ==> default: -- Loader:
00:01:20.473 ==> default: -- Nvram:
00:01:20.473 ==> default: -- Base box: spdk/fedora39
00:01:20.473 ==> default: -- Storage pool: default
00:01:20.473 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732865590_ca0b0b7aaa4bcf5b9e73.img (20G)
00:01:20.473 ==> default: -- Volume Cache: default
00:01:20.473 ==> default: -- Kernel:
00:01:20.473 ==> default: -- Initrd:
00:01:20.473 ==> default: -- Graphics Type: vnc
00:01:20.473 ==> default: -- Graphics Port: -1
00:01:20.473 ==> default: -- Graphics IP: 127.0.0.1
00:01:20.473 ==> default: -- Graphics Password: Not defined
00:01:20.473 ==> default: -- Video Type: cirrus
00:01:20.473 ==> default: -- Video VRAM: 9216
00:01:20.473 ==> default: -- Sound Type:
00:01:20.473 ==> default: -- Keymap: en-us
00:01:20.473 ==> default: -- TPM Path:
00:01:20.473 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:20.473 ==> default: -- Command line args:
00:01:20.473 ==> default: -> value=-device,
00:01:20.473 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:20.473 ==> default: -> value=-drive,
00:01:20.473 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:01:20.473 ==> default: -> value=-device,
00:01:20.473 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:01:20.473 ==> default: -> value=-device,
00:01:20.473 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:20.473 ==> default: -> value=-drive,
00:01:20.473 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-1-drive0,
00:01:20.473 ==> default: -> value=-device,
00:01:20.474 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:20.474 ==> default: -> value=-device,
00:01:20.474 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:01:20.474 ==> default: -> value=-drive,
00:01:20.474 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:01:20.474 ==> default: -> value=-device,
00:01:20.474 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:20.474 ==> default: -> value=-drive,
00:01:20.474 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:01:20.474 ==> default: -> value=-device,
00:01:20.474 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:20.474 ==> default: -> value=-drive,
00:01:20.474 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:01:20.474 ==> default: -> value=-device,
00:01:20.474 ==> default: ->
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:20.474 ==> default: -> value=-device, 00:01:20.474 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:20.474 ==> default: -> value=-device, 00:01:20.474 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:01:20.474 ==> default: -> value=-drive, 00:01:20.474 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:20.474 ==> default: -> value=-device, 00:01:20.474 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:20.736 ==> default: Creating shared folders metadata... 00:01:20.736 ==> default: Starting domain. 00:01:22.652 ==> default: Waiting for domain to get an IP address... 00:01:40.850 ==> default: Waiting for SSH to become available... 00:01:41.792 ==> default: Configuring and enabling network interfaces... 00:01:46.003 default: SSH address: 192.168.121.154:22 00:01:46.003 default: SSH username: vagrant 00:01:46.003 default: SSH auth method: private key 00:01:47.919 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:57.930 ==> default: Mounting SSHFS shared folder... 00:01:58.235 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:58.235 ==> default: Checking Mount.. 00:01:59.656 ==> default: Folder Successfully Mounted! 00:01:59.656 00:01:59.656 SUCCESS! 00:01:59.656 00:01:59.656 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:59.656 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:59.656 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:59.656 00:01:59.668 [Pipeline] } 00:01:59.685 [Pipeline] // stage 00:01:59.697 [Pipeline] dir 00:01:59.698 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:01:59.700 [Pipeline] { 00:01:59.715 [Pipeline] catchError 00:01:59.718 [Pipeline] { 00:01:59.732 [Pipeline] sh 00:02:00.017 + vagrant ssh-config --host vagrant 00:02:00.017 + sed -ne '/^Host/,$p' 00:02:00.017 + tee ssh_conf 00:02:02.564 Host vagrant 00:02:02.564 HostName 192.168.121.154 00:02:02.564 User vagrant 00:02:02.564 Port 22 00:02:02.564 UserKnownHostsFile /dev/null 00:02:02.564 StrictHostKeyChecking no 00:02:02.564 PasswordAuthentication no 00:02:02.564 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:02.564 IdentitiesOnly yes 00:02:02.564 LogLevel FATAL 00:02:02.564 ForwardAgent yes 00:02:02.564 ForwardX11 yes 00:02:02.564 00:02:02.578 [Pipeline] withEnv 00:02:02.580 [Pipeline] { 00:02:02.594 [Pipeline] sh 00:02:02.879 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:02:02.879 source /etc/os-release 00:02:02.879 [[ -e /image.version ]] && img=$(< /image.version) 00:02:02.879 # Minimal, systemd-like check. 
00:02:02.879 if [[ -e /.dockerenv ]]; then 00:02:02.879 # Clear garbage from the node'\''s name: 00:02:02.879 # agt-er_autotest_547-896 -> autotest_547-896 00:02:02.879 # $HOSTNAME is the actual container id 00:02:02.879 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:02.879 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:02.879 # We can assume this is a mount from a host where container is running, 00:02:02.879 # so fetch its hostname to easily identify the target swarm worker. 00:02:02.879 container="$(< /etc/hostname) ($agent)" 00:02:02.879 else 00:02:02.879 # Fallback 00:02:02.879 container=$agent 00:02:02.879 fi 00:02:02.879 fi 00:02:02.879 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:02.879 ' 00:02:03.154 [Pipeline] } 00:02:03.171 [Pipeline] // withEnv 00:02:03.180 [Pipeline] setCustomBuildProperty 00:02:03.196 [Pipeline] stage 00:02:03.198 [Pipeline] { (Tests) 00:02:03.215 [Pipeline] sh 00:02:03.500 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:03.776 [Pipeline] sh 00:02:04.061 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:04.340 [Pipeline] timeout 00:02:04.341 Timeout set to expire in 50 min 00:02:04.343 [Pipeline] { 00:02:04.359 [Pipeline] sh 00:02:04.644 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:02:05.216 HEAD is now at 35cd3e84d bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask 00:02:05.232 [Pipeline] sh 00:02:05.515 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:02:05.792 [Pipeline] sh 00:02:06.077 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:06.355 [Pipeline] sh 00:02:06.639 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:02:06.901 ++ readlink -f spdk_repo 00:02:06.901 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:06.901 + [[ -n /home/vagrant/spdk_repo ]] 00:02:06.901 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:06.901 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:06.901 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:06.901 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:06.901 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:06.901 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:06.901 + cd /home/vagrant/spdk_repo 00:02:06.901 + source /etc/os-release 00:02:06.901 ++ NAME='Fedora Linux' 00:02:06.901 ++ VERSION='39 (Cloud Edition)' 00:02:06.901 ++ ID=fedora 00:02:06.901 ++ VERSION_ID=39 00:02:06.901 ++ VERSION_CODENAME= 00:02:06.901 ++ PLATFORM_ID=platform:f39 00:02:06.901 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:06.901 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:06.901 ++ LOGO=fedora-logo-icon 00:02:06.901 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:06.901 ++ HOME_URL=https://fedoraproject.org/ 00:02:06.901 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:06.901 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:06.901 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:06.901 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:06.901 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:06.901 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:06.901 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:06.901 ++ SUPPORT_END=2024-11-12 00:02:06.901 ++ VARIANT='Cloud Edition' 00:02:06.901 ++ VARIANT_ID=cloud 00:02:06.901 + uname -a 00:02:06.901 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:06.901 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:07.163 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:07.425 Hugepages 00:02:07.425 node hugesize free / total 00:02:07.425 node0 1048576kB 0 / 0 00:02:07.425 node0 2048kB 0 / 0 00:02:07.425 00:02:07.425 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:07.425 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:07.425 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:07.425 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:02:07.425 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:02:07.687 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:07.687 + rm -f /tmp/spdk-ld-path 00:02:07.687 + source autorun-spdk.conf 00:02:07.687 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:07.687 ++ SPDK_TEST_NVME=1 00:02:07.687 ++ SPDK_TEST_FTL=1 00:02:07.687 ++ SPDK_TEST_ISAL=1 00:02:07.687 ++ SPDK_RUN_ASAN=1 00:02:07.687 ++ SPDK_RUN_UBSAN=1 00:02:07.687 ++ SPDK_TEST_XNVME=1 00:02:07.687 ++ SPDK_TEST_NVME_FDP=1 00:02:07.687 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:07.687 ++ RUN_NIGHTLY=1 00:02:07.687 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:07.687 + [[ -n '' ]] 00:02:07.687 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:07.687 + for M in /var/spdk/build-*-manifest.txt 00:02:07.687 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:07.687 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:07.687 + for M in /var/spdk/build-*-manifest.txt 00:02:07.687 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:07.688 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:07.688 + for M in /var/spdk/build-*-manifest.txt 00:02:07.688 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:07.688 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:07.688 ++ uname 00:02:07.688 + [[ Linux == \L\i\n\u\x ]] 00:02:07.688 + sudo dmesg -T 00:02:07.688 + sudo dmesg --clear 00:02:07.688 + dmesg_pid=5040 00:02:07.688 
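The SPDK_TEST_* switches sourced from autorun-spdk.conf above are what select the suites this job runs (NVMe, FTL, ISA-L, xnvme, FDP, and so on). A minimal, hypothetical sketch of checking those switches by hand inside the VM, assuming the same /home/vagrant/spdk_repo layout used in this log:

  # hedged example, not part of the CI scripts themselves
  source /home/vagrant/spdk_repo/autorun-spdk.conf
  [[ "${SPDK_TEST_NVME_FDP:-0}" -eq 1 ]] && echo "FDP tests requested"
  [[ "${SPDK_TEST_FTL:-0}" -eq 1 ]] && echo "FTL tests requested"
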
+ [[ Fedora Linux == FreeBSD ]] 00:02:07.688 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:07.688 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:07.688 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:07.688 + [[ -x /usr/src/fio-static/fio ]] 00:02:07.688 + sudo dmesg -Tw 00:02:07.688 + export FIO_BIN=/usr/src/fio-static/fio 00:02:07.688 + FIO_BIN=/usr/src/fio-static/fio 00:02:07.688 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:07.688 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:07.688 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:07.688 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:07.688 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:07.688 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:07.688 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:07.688 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:07.688 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:07.688 07:33:57 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:07.688 07:33:57 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:07.688 07:33:57 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:07.688 07:33:57 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:02:07.688 07:33:57 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:02:07.688 07:33:57 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:02:07.688 07:33:57 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:02:07.688 07:33:57 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:02:07.688 07:33:57 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:02:07.688 07:33:57 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:02:07.688 07:33:57 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:07.688 07:33:57 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=1 00:02:07.688 07:33:57 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:07.688 07:33:57 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:07.949 07:33:57 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:07.949 07:33:57 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:07.949 07:33:57 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:07.949 07:33:57 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:07.949 07:33:57 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:07.949 07:33:57 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:07.949 07:33:57 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:07.950 07:33:57 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:07.950 07:33:57 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:07.950 07:33:57 -- paths/export.sh@5 -- $ export PATH 00:02:07.950 07:33:57 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:07.950 07:33:57 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:07.950 07:33:57 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:07.950 07:33:57 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732865637.XXXXXX 00:02:07.950 07:33:57 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732865637.hiYbcW 00:02:07.950 07:33:57 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:07.950 07:33:57 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:02:07.950 07:33:57 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:07.950 07:33:57 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:07.950 07:33:57 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:07.950 07:33:57 -- common/autobuild_common.sh@509 -- $ get_config_params 00:02:07.950 07:33:57 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:07.950 07:33:57 -- common/autotest_common.sh@10 -- $ set +x 00:02:07.950 07:33:57 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:02:07.950 07:33:57 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:07.950 07:33:57 -- pm/common@17 -- $ local monitor 00:02:07.950 07:33:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:07.950 07:33:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:07.950 07:33:57 -- pm/common@25 -- $ sleep 1 00:02:07.950 07:33:57 -- pm/common@21 -- $ date +%s 00:02:07.950 07:33:57 -- pm/common@21 -- $ date +%s 00:02:07.950 07:33:57 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732865637 00:02:07.950 07:33:57 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732865637 00:02:07.950 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732865637_collect-cpu-load.pm.log 00:02:07.950 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732865637_collect-vmstat.pm.log 00:02:08.893 07:33:58 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:08.893 07:33:58 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:08.893 07:33:58 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:08.893 07:33:58 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:08.893 07:33:58 -- spdk/autobuild.sh@16 -- $ date -u 00:02:08.893 Fri Nov 29 07:33:58 AM UTC 2024 00:02:08.893 07:33:58 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:08.893 v25.01-pre-276-g35cd3e84d 00:02:08.893 07:33:58 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:08.893 07:33:58 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:08.893 07:33:58 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:08.893 07:33:58 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:08.893 07:33:58 -- common/autotest_common.sh@10 -- $ set +x 00:02:08.893 ************************************ 00:02:08.893 START TEST asan 00:02:08.893 ************************************ 00:02:08.893 using asan 00:02:08.893 07:33:58 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:02:08.893 00:02:08.893 real 0m0.000s 00:02:08.893 user 0m0.000s 00:02:08.893 sys 0m0.000s 00:02:08.893 07:33:58 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:08.893 ************************************ 00:02:08.893 END TEST asan 00:02:08.893 ************************************ 00:02:08.893 07:33:58 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:08.893 07:33:58 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:08.893 07:33:58 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:08.893 07:33:58 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:08.893 07:33:58 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:08.893 07:33:58 -- common/autotest_common.sh@10 -- $ set +x 00:02:08.893 ************************************ 00:02:08.893 START TEST ubsan 00:02:08.893 ************************************ 00:02:08.893 using ubsan 00:02:08.893 07:33:58 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:08.893 00:02:08.893 real 0m0.000s 00:02:08.893 user 0m0.000s 00:02:08.893 sys 0m0.000s 00:02:08.893 ************************************ 00:02:08.893 END TEST ubsan 00:02:08.893 ************************************ 00:02:08.893 07:33:58 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:08.893 07:33:58 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:09.154 07:33:58 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:09.154 07:33:58 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:09.154 07:33:58 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:09.154 07:33:58 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:09.154 07:33:58 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:09.154 07:33:58 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:09.154 07:33:58 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
00:02:09.155 07:33:58 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:09.155 07:33:58 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:02:09.155 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:09.155 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:09.416 Using 'verbs' RDMA provider 00:02:20.366 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:30.357 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:30.874 Creating mk/config.mk...done. 00:02:30.874 Creating mk/cc.flags.mk...done. 00:02:30.874 Type 'make' to build. 00:02:30.874 07:34:20 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:30.874 07:34:20 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:30.874 07:34:20 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:30.874 07:34:20 -- common/autotest_common.sh@10 -- $ set +x 00:02:30.874 ************************************ 00:02:30.874 START TEST make 00:02:30.874 ************************************ 00:02:30.874 07:34:20 make -- common/autotest_common.sh@1129 -- $ make -j10 00:02:31.132 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:02:31.132 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:02:31.132 meson setup builddir \ 00:02:31.132 -Dwith-libaio=enabled \ 00:02:31.132 -Dwith-liburing=enabled \ 00:02:31.132 -Dwith-libvfn=disabled \ 00:02:31.132 -Dwith-spdk=disabled \ 00:02:31.132 -Dexamples=false \ 00:02:31.132 -Dtests=false \ 00:02:31.132 -Dtools=false && \ 00:02:31.132 meson compile -C builddir && \ 00:02:31.132 cd -) 00:02:31.132 make[1]: Nothing to be done for 'all'. 
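The xnvme subproject configured below probes the build host for libaio and liburing (see the "Run-time dependency" lines that follow). When reproducing this step outside the CI VM, a quick check along these lines can confirm the same dependencies are visible; a sketch assuming pkg-config and the distro -devel packages are installed:

  # hedged example; not taken from the CI scripts
  pkg-config --modversion liburing            # reported as 2.2 in the Meson output below
  test -e /usr/include/libaio.h && echo "libaio headers present"
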
00:02:33.031 The Meson build system 00:02:33.031 Version: 1.5.0 00:02:33.031 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:02:33.031 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:33.031 Build type: native build 00:02:33.031 Project name: xnvme 00:02:33.031 Project version: 0.7.5 00:02:33.031 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:33.031 C linker for the host machine: cc ld.bfd 2.40-14 00:02:33.031 Host machine cpu family: x86_64 00:02:33.031 Host machine cpu: x86_64 00:02:33.031 Message: host_machine.system: linux 00:02:33.031 Compiler for C supports arguments -Wno-missing-braces: YES 00:02:33.031 Compiler for C supports arguments -Wno-cast-function-type: YES 00:02:33.031 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:33.031 Run-time dependency threads found: YES 00:02:33.031 Has header "setupapi.h" : NO 00:02:33.031 Has header "linux/blkzoned.h" : YES 00:02:33.031 Has header "linux/blkzoned.h" : YES (cached) 00:02:33.031 Has header "libaio.h" : YES 00:02:33.031 Library aio found: YES 00:02:33.031 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:33.031 Run-time dependency liburing found: YES 2.2 00:02:33.031 Dependency libvfn skipped: feature with-libvfn disabled 00:02:33.031 Found CMake: /usr/bin/cmake (3.27.7) 00:02:33.031 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:02:33.031 Subproject spdk : skipped: feature with-spdk disabled 00:02:33.031 Run-time dependency appleframeworks found: NO (tried framework) 00:02:33.031 Run-time dependency appleframeworks found: NO (tried framework) 00:02:33.031 Library rt found: YES 00:02:33.031 Checking for function "clock_gettime" with dependency -lrt: YES 00:02:33.031 Configuring xnvme_config.h using configuration 00:02:33.031 Configuring xnvme.spec using configuration 00:02:33.031 Run-time dependency bash-completion found: YES 2.11 00:02:33.031 Message: Bash-completions: /usr/share/bash-completion/completions 00:02:33.031 Program cp found: YES (/usr/bin/cp) 00:02:33.031 Build targets in project: 3 00:02:33.031 00:02:33.031 xnvme 0.7.5 00:02:33.031 00:02:33.031 Subprojects 00:02:33.031 spdk : NO Feature 'with-spdk' disabled 00:02:33.031 00:02:33.031 User defined options 00:02:33.031 examples : false 00:02:33.031 tests : false 00:02:33.031 tools : false 00:02:33.031 with-libaio : enabled 00:02:33.031 with-liburing: enabled 00:02:33.031 with-libvfn : disabled 00:02:33.031 with-spdk : disabled 00:02:33.031 00:02:33.031 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:33.291 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:02:33.291 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:02:33.291 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:02:33.291 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:02:33.291 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:02:33.291 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:02:33.291 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:02:33.291 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:02:33.291 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:02:33.291 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:02:33.291 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:02:33.550 
[11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:02:33.550 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:02:33.550 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:02:33.550 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:02:33.550 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:02:33.550 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:02:33.550 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:02:33.550 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:02:33.550 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:02:33.550 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:02:33.550 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:02:33.550 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:02:33.550 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:02:33.550 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:02:33.550 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:02:33.550 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:02:33.550 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:02:33.550 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:02:33.550 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:02:33.550 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:02:33.550 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:02:33.550 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:02:33.550 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:02:33.550 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:02:33.550 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:02:33.550 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:02:33.550 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:02:33.550 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:02:33.550 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:02:33.550 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:02:33.550 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:02:33.550 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:02:33.550 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:02:33.550 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:02:33.550 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:02:33.550 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:02:33.820 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:02:33.820 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:02:33.820 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:02:33.820 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:02:33.821 
[51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:02:33.821 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:02:33.821 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:02:33.821 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:02:33.821 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:02:33.821 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:02:33.821 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:02:33.821 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:02:33.821 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:02:33.821 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:02:33.821 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:02:33.821 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:02:33.821 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:02:33.821 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:02:33.821 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:02:33.821 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:02:33.821 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:02:33.821 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:02:33.821 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:02:33.821 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:02:34.078 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:02:34.078 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:02:34.078 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:02:34.335 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:02:34.335 [75/76] Linking static target lib/libxnvme.a 00:02:34.335 [76/76] Linking target lib/libxnvme.so.0.7.5 00:02:34.335 INFO: autodetecting backend as ninja 00:02:34.335 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:34.335 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:02:40.988 The Meson build system 00:02:40.988 Version: 1.5.0 00:02:40.988 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:40.988 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:40.988 Build type: native build 00:02:40.988 Program cat found: YES (/usr/bin/cat) 00:02:40.988 Project name: DPDK 00:02:40.988 Project version: 24.03.0 00:02:40.988 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:40.988 C linker for the host machine: cc ld.bfd 2.40-14 00:02:40.988 Host machine cpu family: x86_64 00:02:40.988 Host machine cpu: x86_64 00:02:40.988 Message: ## Building in Developer Mode ## 00:02:40.988 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:40.988 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:40.988 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:40.988 Program python3 found: YES (/usr/bin/python3) 00:02:40.988 Program cat found: YES (/usr/bin/cat) 00:02:40.988 Compiler for C supports arguments -march=native: YES 00:02:40.988 Checking for size of "void *" : 8 00:02:40.988 Checking for size of "void *" : 8 (cached) 00:02:40.988 Compiler for C supports link arguments 
-Wl,--undefined-version: YES 00:02:40.988 Library m found: YES 00:02:40.988 Library numa found: YES 00:02:40.988 Has header "numaif.h" : YES 00:02:40.988 Library fdt found: NO 00:02:40.988 Library execinfo found: NO 00:02:40.988 Has header "execinfo.h" : YES 00:02:40.988 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:40.988 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:40.988 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:40.988 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:40.988 Run-time dependency openssl found: YES 3.1.1 00:02:40.988 Run-time dependency libpcap found: YES 1.10.4 00:02:40.988 Has header "pcap.h" with dependency libpcap: YES 00:02:40.988 Compiler for C supports arguments -Wcast-qual: YES 00:02:40.988 Compiler for C supports arguments -Wdeprecated: YES 00:02:40.988 Compiler for C supports arguments -Wformat: YES 00:02:40.988 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:40.988 Compiler for C supports arguments -Wformat-security: NO 00:02:40.988 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:40.988 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:40.988 Compiler for C supports arguments -Wnested-externs: YES 00:02:40.988 Compiler for C supports arguments -Wold-style-definition: YES 00:02:40.988 Compiler for C supports arguments -Wpointer-arith: YES 00:02:40.988 Compiler for C supports arguments -Wsign-compare: YES 00:02:40.988 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:40.988 Compiler for C supports arguments -Wundef: YES 00:02:40.988 Compiler for C supports arguments -Wwrite-strings: YES 00:02:40.988 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:40.988 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:40.988 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:40.988 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:40.988 Program objdump found: YES (/usr/bin/objdump) 00:02:40.988 Compiler for C supports arguments -mavx512f: YES 00:02:40.988 Checking if "AVX512 checking" compiles: YES 00:02:40.988 Fetching value of define "__SSE4_2__" : 1 00:02:40.988 Fetching value of define "__AES__" : 1 00:02:40.988 Fetching value of define "__AVX__" : 1 00:02:40.989 Fetching value of define "__AVX2__" : 1 00:02:40.989 Fetching value of define "__AVX512BW__" : 1 00:02:40.989 Fetching value of define "__AVX512CD__" : 1 00:02:40.989 Fetching value of define "__AVX512DQ__" : 1 00:02:40.989 Fetching value of define "__AVX512F__" : 1 00:02:40.989 Fetching value of define "__AVX512VL__" : 1 00:02:40.989 Fetching value of define "__PCLMUL__" : 1 00:02:40.989 Fetching value of define "__RDRND__" : 1 00:02:40.989 Fetching value of define "__RDSEED__" : 1 00:02:40.989 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:40.989 Fetching value of define "__znver1__" : (undefined) 00:02:40.989 Fetching value of define "__znver2__" : (undefined) 00:02:40.989 Fetching value of define "__znver3__" : (undefined) 00:02:40.989 Fetching value of define "__znver4__" : (undefined) 00:02:40.989 Library asan found: YES 00:02:40.989 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:40.989 Message: lib/log: Defining dependency "log" 00:02:40.989 Message: lib/kvargs: Defining dependency "kvargs" 00:02:40.989 Message: lib/telemetry: Defining dependency "telemetry" 00:02:40.989 Library rt found: YES 00:02:40.989 Checking for function "getentropy" : NO 00:02:40.989 Message: 
lib/eal: Defining dependency "eal" 00:02:40.989 Message: lib/ring: Defining dependency "ring" 00:02:40.989 Message: lib/rcu: Defining dependency "rcu" 00:02:40.989 Message: lib/mempool: Defining dependency "mempool" 00:02:40.989 Message: lib/mbuf: Defining dependency "mbuf" 00:02:40.989 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:40.989 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:40.989 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:40.989 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:40.989 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:40.989 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:40.989 Compiler for C supports arguments -mpclmul: YES 00:02:40.989 Compiler for C supports arguments -maes: YES 00:02:40.989 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:40.989 Compiler for C supports arguments -mavx512bw: YES 00:02:40.989 Compiler for C supports arguments -mavx512dq: YES 00:02:40.989 Compiler for C supports arguments -mavx512vl: YES 00:02:40.989 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:40.989 Compiler for C supports arguments -mavx2: YES 00:02:40.989 Compiler for C supports arguments -mavx: YES 00:02:40.989 Message: lib/net: Defining dependency "net" 00:02:40.989 Message: lib/meter: Defining dependency "meter" 00:02:40.989 Message: lib/ethdev: Defining dependency "ethdev" 00:02:40.989 Message: lib/pci: Defining dependency "pci" 00:02:40.989 Message: lib/cmdline: Defining dependency "cmdline" 00:02:40.989 Message: lib/hash: Defining dependency "hash" 00:02:40.989 Message: lib/timer: Defining dependency "timer" 00:02:40.989 Message: lib/compressdev: Defining dependency "compressdev" 00:02:40.989 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:40.989 Message: lib/dmadev: Defining dependency "dmadev" 00:02:40.989 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:40.989 Message: lib/power: Defining dependency "power" 00:02:40.989 Message: lib/reorder: Defining dependency "reorder" 00:02:40.989 Message: lib/security: Defining dependency "security" 00:02:40.989 Has header "linux/userfaultfd.h" : YES 00:02:40.989 Has header "linux/vduse.h" : YES 00:02:40.989 Message: lib/vhost: Defining dependency "vhost" 00:02:40.989 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:40.989 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:40.989 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:40.989 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:40.989 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:40.989 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:40.989 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:40.989 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:40.989 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:40.989 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:40.989 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:40.989 Configuring doxy-api-html.conf using configuration 00:02:40.989 Configuring doxy-api-man.conf using configuration 00:02:40.989 Program mandb found: YES (/usr/bin/mandb) 00:02:40.989 Program sphinx-build found: NO 00:02:40.989 Configuring rte_build_config.h using configuration 00:02:40.989 Message: 00:02:40.989 ================= 00:02:40.989 Applications Enabled 00:02:40.989 
================= 00:02:40.989 00:02:40.989 apps: 00:02:40.989 00:02:40.989 00:02:40.989 Message: 00:02:40.989 ================= 00:02:40.989 Libraries Enabled 00:02:40.989 ================= 00:02:40.989 00:02:40.989 libs: 00:02:40.989 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:40.989 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:40.989 cryptodev, dmadev, power, reorder, security, vhost, 00:02:40.989 00:02:40.989 Message: 00:02:40.989 =============== 00:02:40.989 Drivers Enabled 00:02:40.989 =============== 00:02:40.989 00:02:40.989 common: 00:02:40.989 00:02:40.989 bus: 00:02:40.989 pci, vdev, 00:02:40.989 mempool: 00:02:40.989 ring, 00:02:40.989 dma: 00:02:40.989 00:02:40.989 net: 00:02:40.989 00:02:40.989 crypto: 00:02:40.989 00:02:40.989 compress: 00:02:40.989 00:02:40.989 vdpa: 00:02:40.989 00:02:40.989 00:02:40.989 Message: 00:02:40.989 ================= 00:02:40.989 Content Skipped 00:02:40.989 ================= 00:02:40.989 00:02:40.989 apps: 00:02:40.989 dumpcap: explicitly disabled via build config 00:02:40.989 graph: explicitly disabled via build config 00:02:40.989 pdump: explicitly disabled via build config 00:02:40.989 proc-info: explicitly disabled via build config 00:02:40.989 test-acl: explicitly disabled via build config 00:02:40.989 test-bbdev: explicitly disabled via build config 00:02:40.989 test-cmdline: explicitly disabled via build config 00:02:40.989 test-compress-perf: explicitly disabled via build config 00:02:40.989 test-crypto-perf: explicitly disabled via build config 00:02:40.989 test-dma-perf: explicitly disabled via build config 00:02:40.989 test-eventdev: explicitly disabled via build config 00:02:40.989 test-fib: explicitly disabled via build config 00:02:40.989 test-flow-perf: explicitly disabled via build config 00:02:40.989 test-gpudev: explicitly disabled via build config 00:02:40.989 test-mldev: explicitly disabled via build config 00:02:40.989 test-pipeline: explicitly disabled via build config 00:02:40.989 test-pmd: explicitly disabled via build config 00:02:40.989 test-regex: explicitly disabled via build config 00:02:40.989 test-sad: explicitly disabled via build config 00:02:40.989 test-security-perf: explicitly disabled via build config 00:02:40.989 00:02:40.989 libs: 00:02:40.989 argparse: explicitly disabled via build config 00:02:40.989 metrics: explicitly disabled via build config 00:02:40.989 acl: explicitly disabled via build config 00:02:40.989 bbdev: explicitly disabled via build config 00:02:40.989 bitratestats: explicitly disabled via build config 00:02:40.989 bpf: explicitly disabled via build config 00:02:40.989 cfgfile: explicitly disabled via build config 00:02:40.989 distributor: explicitly disabled via build config 00:02:40.989 efd: explicitly disabled via build config 00:02:40.989 eventdev: explicitly disabled via build config 00:02:40.989 dispatcher: explicitly disabled via build config 00:02:40.989 gpudev: explicitly disabled via build config 00:02:40.989 gro: explicitly disabled via build config 00:02:40.989 gso: explicitly disabled via build config 00:02:40.989 ip_frag: explicitly disabled via build config 00:02:40.989 jobstats: explicitly disabled via build config 00:02:40.989 latencystats: explicitly disabled via build config 00:02:40.989 lpm: explicitly disabled via build config 00:02:40.989 member: explicitly disabled via build config 00:02:40.989 pcapng: explicitly disabled via build config 00:02:40.989 rawdev: explicitly disabled via build config 00:02:40.989 regexdev: explicitly 
disabled via build config 00:02:40.989 mldev: explicitly disabled via build config 00:02:40.989 rib: explicitly disabled via build config 00:02:40.989 sched: explicitly disabled via build config 00:02:40.989 stack: explicitly disabled via build config 00:02:40.989 ipsec: explicitly disabled via build config 00:02:40.989 pdcp: explicitly disabled via build config 00:02:40.989 fib: explicitly disabled via build config 00:02:40.989 port: explicitly disabled via build config 00:02:40.989 pdump: explicitly disabled via build config 00:02:40.989 table: explicitly disabled via build config 00:02:40.989 pipeline: explicitly disabled via build config 00:02:40.989 graph: explicitly disabled via build config 00:02:40.989 node: explicitly disabled via build config 00:02:40.989 00:02:40.989 drivers: 00:02:40.989 common/cpt: not in enabled drivers build config 00:02:40.989 common/dpaax: not in enabled drivers build config 00:02:40.989 common/iavf: not in enabled drivers build config 00:02:40.989 common/idpf: not in enabled drivers build config 00:02:40.989 common/ionic: not in enabled drivers build config 00:02:40.989 common/mvep: not in enabled drivers build config 00:02:40.989 common/octeontx: not in enabled drivers build config 00:02:40.989 bus/auxiliary: not in enabled drivers build config 00:02:40.989 bus/cdx: not in enabled drivers build config 00:02:40.989 bus/dpaa: not in enabled drivers build config 00:02:40.989 bus/fslmc: not in enabled drivers build config 00:02:40.989 bus/ifpga: not in enabled drivers build config 00:02:40.989 bus/platform: not in enabled drivers build config 00:02:40.989 bus/uacce: not in enabled drivers build config 00:02:40.989 bus/vmbus: not in enabled drivers build config 00:02:40.989 common/cnxk: not in enabled drivers build config 00:02:40.989 common/mlx5: not in enabled drivers build config 00:02:40.989 common/nfp: not in enabled drivers build config 00:02:40.989 common/nitrox: not in enabled drivers build config 00:02:40.989 common/qat: not in enabled drivers build config 00:02:40.989 common/sfc_efx: not in enabled drivers build config 00:02:40.989 mempool/bucket: not in enabled drivers build config 00:02:40.989 mempool/cnxk: not in enabled drivers build config 00:02:40.989 mempool/dpaa: not in enabled drivers build config 00:02:40.990 mempool/dpaa2: not in enabled drivers build config 00:02:40.990 mempool/octeontx: not in enabled drivers build config 00:02:40.990 mempool/stack: not in enabled drivers build config 00:02:40.990 dma/cnxk: not in enabled drivers build config 00:02:40.990 dma/dpaa: not in enabled drivers build config 00:02:40.990 dma/dpaa2: not in enabled drivers build config 00:02:40.990 dma/hisilicon: not in enabled drivers build config 00:02:40.990 dma/idxd: not in enabled drivers build config 00:02:40.990 dma/ioat: not in enabled drivers build config 00:02:40.990 dma/skeleton: not in enabled drivers build config 00:02:40.990 net/af_packet: not in enabled drivers build config 00:02:40.990 net/af_xdp: not in enabled drivers build config 00:02:40.990 net/ark: not in enabled drivers build config 00:02:40.990 net/atlantic: not in enabled drivers build config 00:02:40.990 net/avp: not in enabled drivers build config 00:02:40.990 net/axgbe: not in enabled drivers build config 00:02:40.990 net/bnx2x: not in enabled drivers build config 00:02:40.990 net/bnxt: not in enabled drivers build config 00:02:40.990 net/bonding: not in enabled drivers build config 00:02:40.990 net/cnxk: not in enabled drivers build config 00:02:40.990 net/cpfl: not in enabled drivers 
build config 00:02:40.990 net/cxgbe: not in enabled drivers build config 00:02:40.990 net/dpaa: not in enabled drivers build config 00:02:40.990 net/dpaa2: not in enabled drivers build config 00:02:40.990 net/e1000: not in enabled drivers build config 00:02:40.990 net/ena: not in enabled drivers build config 00:02:40.990 net/enetc: not in enabled drivers build config 00:02:40.990 net/enetfec: not in enabled drivers build config 00:02:40.990 net/enic: not in enabled drivers build config 00:02:40.990 net/failsafe: not in enabled drivers build config 00:02:40.990 net/fm10k: not in enabled drivers build config 00:02:40.990 net/gve: not in enabled drivers build config 00:02:40.990 net/hinic: not in enabled drivers build config 00:02:40.990 net/hns3: not in enabled drivers build config 00:02:40.990 net/i40e: not in enabled drivers build config 00:02:40.990 net/iavf: not in enabled drivers build config 00:02:40.990 net/ice: not in enabled drivers build config 00:02:40.990 net/idpf: not in enabled drivers build config 00:02:40.990 net/igc: not in enabled drivers build config 00:02:40.990 net/ionic: not in enabled drivers build config 00:02:40.990 net/ipn3ke: not in enabled drivers build config 00:02:40.990 net/ixgbe: not in enabled drivers build config 00:02:40.990 net/mana: not in enabled drivers build config 00:02:40.990 net/memif: not in enabled drivers build config 00:02:40.990 net/mlx4: not in enabled drivers build config 00:02:40.990 net/mlx5: not in enabled drivers build config 00:02:40.990 net/mvneta: not in enabled drivers build config 00:02:40.990 net/mvpp2: not in enabled drivers build config 00:02:40.990 net/netvsc: not in enabled drivers build config 00:02:40.990 net/nfb: not in enabled drivers build config 00:02:40.990 net/nfp: not in enabled drivers build config 00:02:40.990 net/ngbe: not in enabled drivers build config 00:02:40.990 net/null: not in enabled drivers build config 00:02:40.990 net/octeontx: not in enabled drivers build config 00:02:40.990 net/octeon_ep: not in enabled drivers build config 00:02:40.990 net/pcap: not in enabled drivers build config 00:02:40.990 net/pfe: not in enabled drivers build config 00:02:40.990 net/qede: not in enabled drivers build config 00:02:40.990 net/ring: not in enabled drivers build config 00:02:40.990 net/sfc: not in enabled drivers build config 00:02:40.990 net/softnic: not in enabled drivers build config 00:02:40.990 net/tap: not in enabled drivers build config 00:02:40.990 net/thunderx: not in enabled drivers build config 00:02:40.990 net/txgbe: not in enabled drivers build config 00:02:40.990 net/vdev_netvsc: not in enabled drivers build config 00:02:40.990 net/vhost: not in enabled drivers build config 00:02:40.990 net/virtio: not in enabled drivers build config 00:02:40.990 net/vmxnet3: not in enabled drivers build config 00:02:40.990 raw/*: missing internal dependency, "rawdev" 00:02:40.990 crypto/armv8: not in enabled drivers build config 00:02:40.990 crypto/bcmfs: not in enabled drivers build config 00:02:40.990 crypto/caam_jr: not in enabled drivers build config 00:02:40.990 crypto/ccp: not in enabled drivers build config 00:02:40.990 crypto/cnxk: not in enabled drivers build config 00:02:40.990 crypto/dpaa_sec: not in enabled drivers build config 00:02:40.990 crypto/dpaa2_sec: not in enabled drivers build config 00:02:40.990 crypto/ipsec_mb: not in enabled drivers build config 00:02:40.990 crypto/mlx5: not in enabled drivers build config 00:02:40.990 crypto/mvsam: not in enabled drivers build config 00:02:40.990 crypto/nitrox: 
not in enabled drivers build config 00:02:40.990 crypto/null: not in enabled drivers build config 00:02:40.990 crypto/octeontx: not in enabled drivers build config 00:02:40.990 crypto/openssl: not in enabled drivers build config 00:02:40.990 crypto/scheduler: not in enabled drivers build config 00:02:40.990 crypto/uadk: not in enabled drivers build config 00:02:40.990 crypto/virtio: not in enabled drivers build config 00:02:40.990 compress/isal: not in enabled drivers build config 00:02:40.990 compress/mlx5: not in enabled drivers build config 00:02:40.990 compress/nitrox: not in enabled drivers build config 00:02:40.990 compress/octeontx: not in enabled drivers build config 00:02:40.990 compress/zlib: not in enabled drivers build config 00:02:40.990 regex/*: missing internal dependency, "regexdev" 00:02:40.990 ml/*: missing internal dependency, "mldev" 00:02:40.990 vdpa/ifc: not in enabled drivers build config 00:02:40.990 vdpa/mlx5: not in enabled drivers build config 00:02:40.990 vdpa/nfp: not in enabled drivers build config 00:02:40.990 vdpa/sfc: not in enabled drivers build config 00:02:40.990 event/*: missing internal dependency, "eventdev" 00:02:40.990 baseband/*: missing internal dependency, "bbdev" 00:02:40.990 gpu/*: missing internal dependency, "gpudev" 00:02:40.990 00:02:40.990 00:02:40.990 Build targets in project: 84 00:02:40.990 00:02:40.990 DPDK 24.03.0 00:02:40.990 00:02:40.990 User defined options 00:02:40.990 buildtype : debug 00:02:40.990 default_library : shared 00:02:40.990 libdir : lib 00:02:40.990 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:40.990 b_sanitize : address 00:02:40.990 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:40.990 c_link_args : 00:02:40.990 cpu_instruction_set: native 00:02:40.990 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:40.990 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:40.990 enable_docs : false 00:02:40.990 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:40.990 enable_kmods : false 00:02:40.990 max_lcores : 128 00:02:40.990 tests : false 00:02:40.990 00:02:40.990 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:40.990 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:40.990 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:40.990 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:40.990 [3/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:40.990 [4/267] Linking static target lib/librte_kvargs.a 00:02:40.990 [5/267] Linking static target lib/librte_log.a 00:02:40.990 [6/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:41.247 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:41.247 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:41.505 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:41.505 [10/267] Compiling 
C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:41.505 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:41.505 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:41.505 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:41.505 [14/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.505 [15/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:41.505 [16/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:41.505 [17/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:41.505 [18/267] Linking static target lib/librte_telemetry.a 00:02:41.763 [19/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.763 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:41.763 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:41.763 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:41.763 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:41.763 [24/267] Linking target lib/librte_log.so.24.1 00:02:42.021 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:42.021 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:42.021 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:42.021 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:42.021 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:42.021 [30/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:42.021 [31/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:42.021 [32/267] Linking target lib/librte_kvargs.so.24.1 00:02:42.279 [33/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.279 [34/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:42.279 [35/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:42.279 [36/267] Linking target lib/librte_telemetry.so.24.1 00:02:42.279 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:42.279 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:42.279 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:42.279 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:42.279 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:42.279 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:42.279 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:42.279 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:42.537 [45/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:42.537 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:42.537 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:42.537 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 
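(Note on the configuration being compiled here: the "User defined options" block above is meson echoing back the -D options that SPDK's configure step passed when setting up this bundled DPDK build. The literal command line is not captured in this log; an invocation reproducing those options would look roughly like the sketch below, with the long disable_apps/disable_libs/enable_drivers values abbreviated with "..." only because they are printed in full above.)

  meson setup build-tmp \
      --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build --libdir=lib \
      --buildtype=debug -Ddefault_library=shared -Db_sanitize=address \
      -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
      -Dcpu_instruction_set=native -Dmax_lcores=128 \
      -Ddisable_apps=dumpcap,graph,pdump,... \
      -Ddisable_libs=acl,argparse,bbdev,... \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,... \
      -Denable_docs=false -Denable_kmods=false -Dtests=false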
00:02:42.538 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:42.795 [50/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:42.795 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:42.795 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:42.795 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:42.795 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:42.795 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:42.795 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:43.053 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:43.053 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:43.053 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:43.053 [60/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:43.053 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:43.053 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:43.053 [63/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:43.311 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:43.311 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:43.311 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:43.311 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:43.311 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:43.569 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:43.569 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:43.569 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:43.569 [72/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:43.569 [73/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:43.569 [74/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:43.569 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:43.827 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:43.827 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:43.827 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:43.827 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:43.827 [80/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:43.827 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:43.827 [82/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:44.085 [83/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:44.085 [84/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:44.085 [85/267] Linking static target lib/librte_eal.a 00:02:44.085 [86/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:44.085 [87/267] Linking static target lib/librte_ring.a 00:02:44.085 [88/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:44.343 [89/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:44.343 [90/267] Compiling C 
object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:44.343 [91/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:44.343 [92/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:44.343 [93/267] Linking static target lib/librte_rcu.a 00:02:44.343 [94/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:44.343 [95/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:44.343 [96/267] Linking static target lib/librte_mempool.a 00:02:44.601 [97/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.601 [98/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:44.601 [99/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:44.601 [100/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:44.601 [101/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:44.859 [102/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:44.859 [103/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:44.859 [104/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.859 [105/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:44.859 [106/267] Linking static target lib/librte_mbuf.a 00:02:44.859 [107/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:44.859 [108/267] Linking static target lib/librte_net.a 00:02:44.859 [109/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:44.859 [110/267] Linking static target lib/librte_meter.a 00:02:45.118 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:45.118 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:45.118 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:45.118 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:45.118 [115/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.118 [116/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.375 [117/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.375 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:45.375 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:45.632 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:45.632 [121/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.632 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:45.632 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:45.632 [124/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:45.632 [125/267] Linking static target lib/librte_pci.a 00:02:45.889 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:45.889 [127/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:45.889 [128/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:45.889 [129/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:45.889 [130/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:45.889 [131/267] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:45.889 [132/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.147 [133/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:46.147 [134/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:46.147 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:46.147 [136/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:46.147 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:46.147 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:46.147 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:46.147 [140/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:46.147 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:46.147 [142/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:46.147 [143/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:46.147 [144/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:46.147 [145/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:46.404 [146/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:46.404 [147/267] Linking static target lib/librte_cmdline.a 00:02:46.404 [148/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:46.404 [149/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:46.404 [150/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:46.662 [151/267] Linking static target lib/librte_timer.a 00:02:46.662 [152/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:46.662 [153/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:46.662 [154/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:46.662 [155/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:46.662 [156/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:46.662 [157/267] Linking static target lib/librte_compressdev.a 00:02:46.919 [158/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:46.919 [159/267] Linking static target lib/librte_hash.a 00:02:46.919 [160/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:46.919 [161/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:46.919 [162/267] Linking static target lib/librte_ethdev.a 00:02:46.919 [163/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.919 [164/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:46.919 [165/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:47.177 [166/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:47.177 [167/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:47.177 [168/267] Linking static target lib/librte_dmadev.a 00:02:47.434 [169/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:47.434 [170/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:47.434 [171/267] Compiling C object 
lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:47.434 [172/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:47.434 [173/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.434 [174/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.690 [175/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:47.690 [176/267] Linking static target lib/librte_cryptodev.a 00:02:47.690 [177/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:47.690 [178/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:47.690 [179/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:47.690 [180/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.690 [181/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.690 [182/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:47.690 [183/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:47.946 [184/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:47.946 [185/267] Linking static target lib/librte_power.a 00:02:47.946 [186/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:48.203 [187/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:48.203 [188/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:48.203 [189/267] Linking static target lib/librte_reorder.a 00:02:48.203 [190/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:48.203 [191/267] Linking static target lib/librte_security.a 00:02:48.203 [192/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:48.460 [193/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.717 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:48.717 [195/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.717 [196/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:48.717 [197/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:48.974 [198/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.974 [199/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:48.974 [200/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:48.974 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:48.974 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:49.230 [203/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:49.230 [204/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:49.230 [205/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:49.230 [206/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:49.230 [207/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:49.488 [208/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:49.488 [209/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:49.488 [210/267] Generating lib/cryptodev.sym_chk with a 
custom command (wrapped by meson to capture output) 00:02:49.488 [211/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:49.488 [212/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:49.488 [213/267] Linking static target drivers/librte_bus_vdev.a 00:02:49.488 [214/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:49.488 [215/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:49.488 [216/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:49.488 [217/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:49.488 [218/267] Linking static target drivers/librte_bus_pci.a 00:02:49.488 [219/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:49.488 [220/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:49.746 [221/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.746 [222/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:49.746 [223/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:49.746 [224/267] Linking static target drivers/librte_mempool_ring.a 00:02:49.746 [225/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:50.004 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.262 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:51.195 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.195 [229/267] Linking target lib/librte_eal.so.24.1 00:02:51.453 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:51.453 [231/267] Linking target lib/librte_ring.so.24.1 00:02:51.453 [232/267] Linking target lib/librte_timer.so.24.1 00:02:51.453 [233/267] Linking target lib/librte_pci.so.24.1 00:02:51.453 [234/267] Linking target lib/librte_dmadev.so.24.1 00:02:51.453 [235/267] Linking target lib/librte_meter.so.24.1 00:02:51.453 [236/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:51.453 [237/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:51.453 [238/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:51.453 [239/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:51.453 [240/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:51.453 [241/267] Linking target lib/librte_rcu.so.24.1 00:02:51.453 [242/267] Linking target lib/librte_mempool.so.24.1 00:02:51.453 [243/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:51.453 [244/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:51.711 [245/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:51.711 [246/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:51.711 [247/267] Linking target lib/librte_mbuf.so.24.1 00:02:51.711 [248/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:51.711 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:51.711 [250/267] Linking 
target lib/librte_reorder.so.24.1 00:02:51.711 [251/267] Linking target lib/librte_compressdev.so.24.1 00:02:51.711 [252/267] Linking target lib/librte_net.so.24.1 00:02:51.711 [253/267] Linking target lib/librte_cryptodev.so.24.1 00:02:51.969 [254/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:51.969 [255/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:51.969 [256/267] Linking target lib/librte_cmdline.so.24.1 00:02:51.969 [257/267] Linking target lib/librte_hash.so.24.1 00:02:51.969 [258/267] Linking target lib/librte_security.so.24.1 00:02:51.969 [259/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:52.227 [260/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.227 [261/267] Linking target lib/librte_ethdev.so.24.1 00:02:52.227 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:52.519 [263/267] Linking target lib/librte_power.so.24.1 00:02:53.480 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:53.480 [265/267] Linking static target lib/librte_vhost.a 00:02:54.415 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.415 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:54.415 INFO: autodetecting backend as ninja 00:02:54.415 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:06.614 CC lib/log/log.o 00:03:06.614 CC lib/log/log_deprecated.o 00:03:06.614 CC lib/ut_mock/mock.o 00:03:06.614 CC lib/log/log_flags.o 00:03:06.614 CC lib/ut/ut.o 00:03:06.614 LIB libspdk_log.a 00:03:06.614 LIB libspdk_ut_mock.a 00:03:06.614 LIB libspdk_ut.a 00:03:06.614 SO libspdk_ut_mock.so.6.0 00:03:06.614 SO libspdk_log.so.7.1 00:03:06.614 SO libspdk_ut.so.2.0 00:03:06.614 SYMLINK libspdk_ut_mock.so 00:03:06.614 SYMLINK libspdk_log.so 00:03:06.614 SYMLINK libspdk_ut.so 00:03:06.614 CC lib/ioat/ioat.o 00:03:06.614 CC lib/util/bit_array.o 00:03:06.614 CC lib/util/base64.o 00:03:06.614 CC lib/util/cpuset.o 00:03:06.614 CC lib/util/crc16.o 00:03:06.614 CC lib/util/crc32.o 00:03:06.614 CXX lib/trace_parser/trace.o 00:03:06.614 CC lib/util/crc32c.o 00:03:06.614 CC lib/dma/dma.o 00:03:06.872 CC lib/vfio_user/host/vfio_user_pci.o 00:03:06.872 CC lib/util/crc32_ieee.o 00:03:06.872 CC lib/vfio_user/host/vfio_user.o 00:03:06.872 CC lib/util/crc64.o 00:03:06.872 LIB libspdk_dma.a 00:03:06.872 SO libspdk_dma.so.5.0 00:03:06.872 CC lib/util/dif.o 00:03:06.872 CC lib/util/fd.o 00:03:06.872 SYMLINK libspdk_dma.so 00:03:06.872 CC lib/util/fd_group.o 00:03:06.872 CC lib/util/file.o 00:03:06.872 CC lib/util/hexlify.o 00:03:06.872 CC lib/util/iov.o 00:03:06.872 LIB libspdk_ioat.a 00:03:06.872 CC lib/util/math.o 00:03:06.872 SO libspdk_ioat.so.7.0 00:03:06.872 CC lib/util/net.o 00:03:06.872 CC lib/util/pipe.o 00:03:07.131 LIB libspdk_vfio_user.a 00:03:07.131 SYMLINK libspdk_ioat.so 00:03:07.131 CC lib/util/strerror_tls.o 00:03:07.131 CC lib/util/string.o 00:03:07.131 SO libspdk_vfio_user.so.5.0 00:03:07.131 CC lib/util/uuid.o 00:03:07.131 CC lib/util/xor.o 00:03:07.131 SYMLINK libspdk_vfio_user.so 00:03:07.131 CC lib/util/zipf.o 00:03:07.131 CC lib/util/md5.o 00:03:07.389 LIB libspdk_util.a 00:03:07.647 LIB libspdk_trace_parser.a 00:03:07.647 SO libspdk_util.so.10.1 00:03:07.647 SO libspdk_trace_parser.so.6.0 00:03:07.647 SYMLINK libspdk_util.so 
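(Note on the output format from here on: these are SPDK's quiet-make lines — CC compiles one object, LIB archives a static library, SO links the versioned shared object, and SYMLINK creates the unversioned name next to it. One LIB/SO/SYMLINK triple expands to roughly the following; this is only a sketch, as the real recipes live in SPDK's mk/*.mk fragments and carry many more flags.)

  ar crs build/lib/libspdk_log.a lib/log/*.o            # LIB libspdk_log.a
  cc -shared -Wl,-soname,libspdk_log.so.7.1 \
     -Wl,--whole-archive build/lib/libspdk_log.a \
     -Wl,--no-whole-archive \
     -o build/lib/libspdk_log.so.7.1                    # SO libspdk_log.so.7.1
  ln -sf libspdk_log.so.7.1 build/lib/libspdk_log.so    # SYMLINK libspdk_log.so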
00:03:07.647 SYMLINK libspdk_trace_parser.so 00:03:07.905 CC lib/vmd/vmd.o 00:03:07.905 CC lib/vmd/led.o 00:03:07.905 CC lib/conf/conf.o 00:03:07.905 CC lib/json/json_parse.o 00:03:07.905 CC lib/json/json_util.o 00:03:07.905 CC lib/json/json_write.o 00:03:07.905 CC lib/rdma_utils/rdma_utils.o 00:03:07.905 CC lib/idxd/idxd.o 00:03:07.905 CC lib/idxd/idxd_user.o 00:03:07.905 CC lib/env_dpdk/env.o 00:03:07.905 CC lib/env_dpdk/memory.o 00:03:07.905 CC lib/env_dpdk/pci.o 00:03:07.905 CC lib/env_dpdk/init.o 00:03:07.905 LIB libspdk_conf.a 00:03:07.905 SO libspdk_conf.so.6.0 00:03:08.164 CC lib/idxd/idxd_kernel.o 00:03:08.164 LIB libspdk_rdma_utils.a 00:03:08.164 LIB libspdk_json.a 00:03:08.164 SYMLINK libspdk_conf.so 00:03:08.164 SO libspdk_rdma_utils.so.1.0 00:03:08.164 SO libspdk_json.so.6.0 00:03:08.164 CC lib/env_dpdk/threads.o 00:03:08.164 SYMLINK libspdk_rdma_utils.so 00:03:08.164 CC lib/env_dpdk/pci_ioat.o 00:03:08.164 SYMLINK libspdk_json.so 00:03:08.164 CC lib/env_dpdk/pci_virtio.o 00:03:08.164 CC lib/env_dpdk/pci_vmd.o 00:03:08.164 CC lib/env_dpdk/pci_idxd.o 00:03:08.164 CC lib/env_dpdk/pci_event.o 00:03:08.164 CC lib/env_dpdk/sigbus_handler.o 00:03:08.422 CC lib/env_dpdk/pci_dpdk.o 00:03:08.422 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:08.422 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:08.422 CC lib/rdma_provider/common.o 00:03:08.422 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:08.422 LIB libspdk_idxd.a 00:03:08.422 LIB libspdk_vmd.a 00:03:08.422 SO libspdk_idxd.so.12.1 00:03:08.422 SO libspdk_vmd.so.6.0 00:03:08.422 CC lib/jsonrpc/jsonrpc_server.o 00:03:08.422 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:08.422 CC lib/jsonrpc/jsonrpc_client.o 00:03:08.422 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:08.422 SYMLINK libspdk_idxd.so 00:03:08.422 SYMLINK libspdk_vmd.so 00:03:08.680 LIB libspdk_rdma_provider.a 00:03:08.681 SO libspdk_rdma_provider.so.7.0 00:03:08.681 SYMLINK libspdk_rdma_provider.so 00:03:08.681 LIB libspdk_jsonrpc.a 00:03:08.681 SO libspdk_jsonrpc.so.6.0 00:03:08.939 SYMLINK libspdk_jsonrpc.so 00:03:09.197 CC lib/rpc/rpc.o 00:03:09.198 LIB libspdk_env_dpdk.a 00:03:09.198 SO libspdk_env_dpdk.so.15.1 00:03:09.198 LIB libspdk_rpc.a 00:03:09.198 SO libspdk_rpc.so.6.0 00:03:09.456 SYMLINK libspdk_rpc.so 00:03:09.456 SYMLINK libspdk_env_dpdk.so 00:03:09.456 CC lib/trace/trace_flags.o 00:03:09.456 CC lib/trace/trace_rpc.o 00:03:09.456 CC lib/trace/trace.o 00:03:09.456 CC lib/keyring/keyring.o 00:03:09.456 CC lib/keyring/keyring_rpc.o 00:03:09.456 CC lib/notify/notify.o 00:03:09.456 CC lib/notify/notify_rpc.o 00:03:09.715 LIB libspdk_notify.a 00:03:09.715 SO libspdk_notify.so.6.0 00:03:09.715 LIB libspdk_keyring.a 00:03:09.715 SYMLINK libspdk_notify.so 00:03:09.715 LIB libspdk_trace.a 00:03:09.715 SO libspdk_keyring.so.2.0 00:03:09.715 SO libspdk_trace.so.11.0 00:03:09.715 SYMLINK libspdk_keyring.so 00:03:09.973 SYMLINK libspdk_trace.so 00:03:09.973 CC lib/sock/sock_rpc.o 00:03:09.973 CC lib/sock/sock.o 00:03:09.973 CC lib/thread/iobuf.o 00:03:09.973 CC lib/thread/thread.o 00:03:10.540 LIB libspdk_sock.a 00:03:10.540 SO libspdk_sock.so.10.0 00:03:10.540 SYMLINK libspdk_sock.so 00:03:10.801 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:10.801 CC lib/nvme/nvme_ctrlr.o 00:03:10.801 CC lib/nvme/nvme_ns.o 00:03:10.801 CC lib/nvme/nvme_fabric.o 00:03:10.801 CC lib/nvme/nvme_qpair.o 00:03:10.801 CC lib/nvme/nvme.o 00:03:10.801 CC lib/nvme/nvme_ns_cmd.o 00:03:10.801 CC lib/nvme/nvme_pcie.o 00:03:10.801 CC lib/nvme/nvme_pcie_common.o 00:03:11.371 CC lib/nvme/nvme_quirks.o 00:03:11.371 CC 
lib/nvme/nvme_transport.o 00:03:11.371 CC lib/nvme/nvme_discovery.o 00:03:11.371 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:11.629 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:11.629 CC lib/nvme/nvme_tcp.o 00:03:11.629 LIB libspdk_thread.a 00:03:11.629 CC lib/nvme/nvme_opal.o 00:03:11.629 SO libspdk_thread.so.11.0 00:03:11.629 SYMLINK libspdk_thread.so 00:03:11.629 CC lib/nvme/nvme_io_msg.o 00:03:11.887 CC lib/nvme/nvme_poll_group.o 00:03:11.887 CC lib/nvme/nvme_zns.o 00:03:11.887 CC lib/nvme/nvme_stubs.o 00:03:12.145 CC lib/accel/accel.o 00:03:12.145 CC lib/nvme/nvme_auth.o 00:03:12.145 CC lib/accel/accel_rpc.o 00:03:12.145 CC lib/accel/accel_sw.o 00:03:12.145 CC lib/nvme/nvme_cuse.o 00:03:12.145 CC lib/nvme/nvme_rdma.o 00:03:12.404 CC lib/blob/blobstore.o 00:03:12.662 CC lib/init/json_config.o 00:03:12.662 CC lib/virtio/virtio.o 00:03:12.662 CC lib/fsdev/fsdev.o 00:03:12.662 CC lib/init/subsystem.o 00:03:12.662 CC lib/init/subsystem_rpc.o 00:03:12.921 CC lib/virtio/virtio_vhost_user.o 00:03:12.921 CC lib/init/rpc.o 00:03:12.921 CC lib/blob/request.o 00:03:12.921 CC lib/blob/zeroes.o 00:03:12.921 CC lib/virtio/virtio_vfio_user.o 00:03:12.921 CC lib/virtio/virtio_pci.o 00:03:12.921 LIB libspdk_init.a 00:03:13.179 SO libspdk_init.so.6.0 00:03:13.179 CC lib/blob/blob_bs_dev.o 00:03:13.179 LIB libspdk_accel.a 00:03:13.179 SO libspdk_accel.so.16.0 00:03:13.179 SYMLINK libspdk_init.so 00:03:13.179 CC lib/fsdev/fsdev_io.o 00:03:13.179 CC lib/fsdev/fsdev_rpc.o 00:03:13.179 SYMLINK libspdk_accel.so 00:03:13.438 LIB libspdk_virtio.a 00:03:13.438 CC lib/event/reactor.o 00:03:13.438 CC lib/event/app.o 00:03:13.438 CC lib/event/app_rpc.o 00:03:13.438 CC lib/bdev/bdev.o 00:03:13.438 CC lib/bdev/bdev_rpc.o 00:03:13.438 CC lib/event/log_rpc.o 00:03:13.438 SO libspdk_virtio.so.7.0 00:03:13.438 SYMLINK libspdk_virtio.so 00:03:13.438 CC lib/bdev/bdev_zone.o 00:03:13.438 CC lib/event/scheduler_static.o 00:03:13.438 LIB libspdk_fsdev.a 00:03:13.438 SO libspdk_fsdev.so.2.0 00:03:13.438 LIB libspdk_nvme.a 00:03:13.438 CC lib/bdev/part.o 00:03:13.696 SYMLINK libspdk_fsdev.so 00:03:13.696 CC lib/bdev/scsi_nvme.o 00:03:13.696 SO libspdk_nvme.so.15.0 00:03:13.696 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:13.696 LIB libspdk_event.a 00:03:13.696 SO libspdk_event.so.14.0 00:03:13.954 SYMLINK libspdk_event.so 00:03:13.954 SYMLINK libspdk_nvme.so 00:03:14.520 LIB libspdk_fuse_dispatcher.a 00:03:14.520 SO libspdk_fuse_dispatcher.so.1.0 00:03:14.520 SYMLINK libspdk_fuse_dispatcher.so 00:03:15.084 LIB libspdk_blob.a 00:03:15.084 SO libspdk_blob.so.12.0 00:03:15.341 SYMLINK libspdk_blob.so 00:03:15.600 LIB libspdk_bdev.a 00:03:15.600 CC lib/blobfs/blobfs.o 00:03:15.600 CC lib/blobfs/tree.o 00:03:15.600 CC lib/lvol/lvol.o 00:03:15.600 SO libspdk_bdev.so.17.0 00:03:15.600 SYMLINK libspdk_bdev.so 00:03:15.906 CC lib/nvmf/ctrlr_discovery.o 00:03:15.906 CC lib/nvmf/ctrlr_bdev.o 00:03:15.906 CC lib/nvmf/subsystem.o 00:03:15.906 CC lib/nbd/nbd.o 00:03:15.906 CC lib/nvmf/ctrlr.o 00:03:15.906 CC lib/ftl/ftl_core.o 00:03:15.906 CC lib/ublk/ublk.o 00:03:15.906 CC lib/scsi/dev.o 00:03:15.906 CC lib/scsi/lun.o 00:03:16.165 CC lib/nbd/nbd_rpc.o 00:03:16.165 CC lib/nvmf/nvmf.o 00:03:16.165 CC lib/ftl/ftl_init.o 00:03:16.165 LIB libspdk_nbd.a 00:03:16.165 SO libspdk_nbd.so.7.0 00:03:16.165 SYMLINK libspdk_nbd.so 00:03:16.165 CC lib/nvmf/nvmf_rpc.o 00:03:16.165 CC lib/ublk/ublk_rpc.o 00:03:16.165 CC lib/scsi/port.o 00:03:16.424 LIB libspdk_blobfs.a 00:03:16.424 CC lib/ftl/ftl_layout.o 00:03:16.424 SO libspdk_blobfs.so.11.0 00:03:16.424 SYMLINK 
libspdk_blobfs.so 00:03:16.424 CC lib/ftl/ftl_debug.o 00:03:16.424 CC lib/scsi/scsi.o 00:03:16.424 LIB libspdk_ublk.a 00:03:16.424 LIB libspdk_lvol.a 00:03:16.424 SO libspdk_ublk.so.3.0 00:03:16.424 SO libspdk_lvol.so.11.0 00:03:16.424 CC lib/scsi/scsi_bdev.o 00:03:16.424 SYMLINK libspdk_lvol.so 00:03:16.424 SYMLINK libspdk_ublk.so 00:03:16.424 CC lib/scsi/scsi_pr.o 00:03:16.424 CC lib/ftl/ftl_io.o 00:03:16.424 CC lib/scsi/scsi_rpc.o 00:03:16.683 CC lib/scsi/task.o 00:03:16.683 CC lib/ftl/ftl_sb.o 00:03:16.683 CC lib/ftl/ftl_l2p.o 00:03:16.683 CC lib/ftl/ftl_l2p_flat.o 00:03:16.683 CC lib/ftl/ftl_nv_cache.o 00:03:16.683 CC lib/nvmf/transport.o 00:03:16.683 CC lib/ftl/ftl_band.o 00:03:16.683 CC lib/ftl/ftl_band_ops.o 00:03:16.942 CC lib/ftl/ftl_writer.o 00:03:16.942 CC lib/nvmf/tcp.o 00:03:16.942 CC lib/nvmf/stubs.o 00:03:16.942 LIB libspdk_scsi.a 00:03:16.942 SO libspdk_scsi.so.9.0 00:03:16.942 CC lib/ftl/ftl_rq.o 00:03:16.942 SYMLINK libspdk_scsi.so 00:03:16.942 CC lib/ftl/ftl_reloc.o 00:03:16.942 CC lib/ftl/ftl_l2p_cache.o 00:03:17.206 CC lib/nvmf/mdns_server.o 00:03:17.206 CC lib/iscsi/conn.o 00:03:17.206 CC lib/iscsi/init_grp.o 00:03:17.206 CC lib/ftl/ftl_p2l.o 00:03:17.206 CC lib/vhost/vhost.o 00:03:17.206 CC lib/vhost/vhost_rpc.o 00:03:17.467 CC lib/iscsi/iscsi.o 00:03:17.467 CC lib/iscsi/param.o 00:03:17.467 CC lib/iscsi/portal_grp.o 00:03:17.726 CC lib/iscsi/tgt_node.o 00:03:17.726 CC lib/iscsi/iscsi_subsystem.o 00:03:17.726 CC lib/ftl/ftl_p2l_log.o 00:03:17.726 CC lib/ftl/mngt/ftl_mngt.o 00:03:17.726 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:17.726 CC lib/vhost/vhost_scsi.o 00:03:17.985 CC lib/vhost/vhost_blk.o 00:03:17.985 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:17.985 CC lib/iscsi/iscsi_rpc.o 00:03:17.985 CC lib/iscsi/task.o 00:03:17.985 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:17.985 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:17.985 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:17.985 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:18.243 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:18.243 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:18.243 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:18.243 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:18.243 CC lib/nvmf/rdma.o 00:03:18.243 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:18.243 CC lib/vhost/rte_vhost_user.o 00:03:18.502 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:18.502 CC lib/ftl/utils/ftl_conf.o 00:03:18.502 CC lib/ftl/utils/ftl_md.o 00:03:18.502 CC lib/ftl/utils/ftl_mempool.o 00:03:18.502 CC lib/ftl/utils/ftl_bitmap.o 00:03:18.502 CC lib/nvmf/auth.o 00:03:18.502 CC lib/ftl/utils/ftl_property.o 00:03:18.502 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:18.502 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:18.762 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:18.762 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:18.762 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:18.762 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:18.762 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:18.762 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:18.762 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:19.021 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:19.021 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:19.021 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:19.021 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:19.021 LIB libspdk_iscsi.a 00:03:19.021 CC lib/ftl/base/ftl_base_dev.o 00:03:19.021 CC lib/ftl/base/ftl_base_bdev.o 00:03:19.021 SO libspdk_iscsi.so.8.0 00:03:19.021 LIB libspdk_vhost.a 00:03:19.021 CC lib/ftl/ftl_trace.o 00:03:19.021 SO libspdk_vhost.so.8.0 00:03:19.279 SYMLINK libspdk_iscsi.so 00:03:19.279 SYMLINK libspdk_vhost.so 00:03:19.279 LIB 
libspdk_ftl.a 00:03:19.538 SO libspdk_ftl.so.9.0 00:03:19.538 SYMLINK libspdk_ftl.so 00:03:20.105 LIB libspdk_nvmf.a 00:03:20.105 SO libspdk_nvmf.so.20.0 00:03:20.364 SYMLINK libspdk_nvmf.so 00:03:20.623 CC module/env_dpdk/env_dpdk_rpc.o 00:03:20.623 CC module/accel/iaa/accel_iaa.o 00:03:20.623 CC module/sock/posix/posix.o 00:03:20.623 CC module/accel/dsa/accel_dsa.o 00:03:20.623 CC module/keyring/file/keyring.o 00:03:20.623 CC module/accel/error/accel_error.o 00:03:20.623 CC module/fsdev/aio/fsdev_aio.o 00:03:20.623 CC module/accel/ioat/accel_ioat.o 00:03:20.623 CC module/blob/bdev/blob_bdev.o 00:03:20.623 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:20.623 LIB libspdk_env_dpdk_rpc.a 00:03:20.623 SO libspdk_env_dpdk_rpc.so.6.0 00:03:20.623 SYMLINK libspdk_env_dpdk_rpc.so 00:03:20.623 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:20.882 CC module/keyring/file/keyring_rpc.o 00:03:20.882 CC module/accel/error/accel_error_rpc.o 00:03:20.882 LIB libspdk_scheduler_dynamic.a 00:03:20.882 CC module/accel/ioat/accel_ioat_rpc.o 00:03:20.882 SO libspdk_scheduler_dynamic.so.4.0 00:03:20.882 CC module/accel/iaa/accel_iaa_rpc.o 00:03:20.882 LIB libspdk_keyring_file.a 00:03:20.882 SYMLINK libspdk_scheduler_dynamic.so 00:03:20.882 LIB libspdk_accel_error.a 00:03:20.882 SO libspdk_keyring_file.so.2.0 00:03:20.882 SO libspdk_accel_error.so.2.0 00:03:20.882 CC module/accel/dsa/accel_dsa_rpc.o 00:03:20.882 LIB libspdk_blob_bdev.a 00:03:20.882 LIB libspdk_accel_iaa.a 00:03:20.882 LIB libspdk_accel_ioat.a 00:03:20.882 SO libspdk_blob_bdev.so.12.0 00:03:20.882 SO libspdk_accel_iaa.so.3.0 00:03:20.882 SO libspdk_accel_ioat.so.6.0 00:03:20.882 SYMLINK libspdk_keyring_file.so 00:03:20.882 SYMLINK libspdk_accel_error.so 00:03:20.882 SYMLINK libspdk_blob_bdev.so 00:03:20.882 CC module/fsdev/aio/linux_aio_mgr.o 00:03:20.882 SYMLINK libspdk_accel_iaa.so 00:03:20.882 CC module/keyring/linux/keyring.o 00:03:21.141 SYMLINK libspdk_accel_ioat.so 00:03:21.141 CC module/keyring/linux/keyring_rpc.o 00:03:21.141 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:21.141 LIB libspdk_accel_dsa.a 00:03:21.141 SO libspdk_accel_dsa.so.5.0 00:03:21.141 CC module/scheduler/gscheduler/gscheduler.o 00:03:21.141 SYMLINK libspdk_accel_dsa.so 00:03:21.141 LIB libspdk_keyring_linux.a 00:03:21.141 SO libspdk_keyring_linux.so.1.0 00:03:21.141 LIB libspdk_scheduler_dpdk_governor.a 00:03:21.141 CC module/bdev/delay/vbdev_delay.o 00:03:21.141 CC module/blobfs/bdev/blobfs_bdev.o 00:03:21.141 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:21.141 LIB libspdk_sock_posix.a 00:03:21.141 SYMLINK libspdk_keyring_linux.so 00:03:21.141 SO libspdk_sock_posix.so.6.0 00:03:21.141 CC module/bdev/error/vbdev_error.o 00:03:21.141 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:21.399 LIB libspdk_scheduler_gscheduler.a 00:03:21.399 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:21.399 CC module/bdev/gpt/gpt.o 00:03:21.399 CC module/bdev/lvol/vbdev_lvol.o 00:03:21.399 SO libspdk_scheduler_gscheduler.so.4.0 00:03:21.399 SYMLINK libspdk_sock_posix.so 00:03:21.399 CC module/bdev/gpt/vbdev_gpt.o 00:03:21.399 LIB libspdk_fsdev_aio.a 00:03:21.399 CC module/bdev/malloc/bdev_malloc.o 00:03:21.399 SYMLINK libspdk_scheduler_gscheduler.so 00:03:21.399 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:21.399 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:21.399 SO libspdk_fsdev_aio.so.1.0 00:03:21.399 LIB libspdk_blobfs_bdev.a 00:03:21.399 SYMLINK libspdk_fsdev_aio.so 00:03:21.399 SO libspdk_blobfs_bdev.so.6.0 00:03:21.399 CC module/bdev/error/vbdev_error_rpc.o 
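(Note on the module naming visible in this stretch: bdev modules are compiled in pairs — vbdev_<name>.o implements the virtual bdev itself, and vbdev_<name>_rpc.o registers the JSON-RPC methods that configure it. Once a target is running, those methods are driven through scripts/rpc.py. A hedged usage sketch for the error-injection bdev just built, assuming a Malloc0 bdev already exists; method names and arguments should be checked against `scripts/rpc.py --help`:)

  ./scripts/rpc.py bdev_error_create Malloc0                        # layers EE_Malloc0 on top of Malloc0
  ./scripts/rpc.py bdev_error_inject_error EE_Malloc0 read failure  # fail subsequent reads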
00:03:21.399 LIB libspdk_bdev_delay.a 00:03:21.399 SYMLINK libspdk_blobfs_bdev.so 00:03:21.399 SO libspdk_bdev_delay.so.6.0 00:03:21.658 CC module/bdev/null/bdev_null.o 00:03:21.658 SYMLINK libspdk_bdev_delay.so 00:03:21.658 CC module/bdev/nvme/bdev_nvme.o 00:03:21.658 LIB libspdk_bdev_gpt.a 00:03:21.658 CC module/bdev/passthru/vbdev_passthru.o 00:03:21.658 SO libspdk_bdev_gpt.so.6.0 00:03:21.658 LIB libspdk_bdev_error.a 00:03:21.658 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:21.658 CC module/bdev/raid/bdev_raid.o 00:03:21.658 SO libspdk_bdev_error.so.6.0 00:03:21.658 SYMLINK libspdk_bdev_gpt.so 00:03:21.658 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:21.658 CC module/bdev/split/vbdev_split.o 00:03:21.658 SYMLINK libspdk_bdev_error.so 00:03:21.658 CC module/bdev/null/bdev_null_rpc.o 00:03:21.658 LIB libspdk_bdev_malloc.a 00:03:21.916 SO libspdk_bdev_malloc.so.6.0 00:03:21.916 LIB libspdk_bdev_lvol.a 00:03:21.916 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:21.916 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:21.916 CC module/bdev/xnvme/bdev_xnvme.o 00:03:21.916 SO libspdk_bdev_lvol.so.6.0 00:03:21.916 LIB libspdk_bdev_passthru.a 00:03:21.916 CC module/bdev/split/vbdev_split_rpc.o 00:03:21.916 SYMLINK libspdk_bdev_malloc.so 00:03:21.916 SO libspdk_bdev_passthru.so.6.0 00:03:21.916 SYMLINK libspdk_bdev_lvol.so 00:03:21.916 SYMLINK libspdk_bdev_passthru.so 00:03:21.916 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:21.916 LIB libspdk_bdev_null.a 00:03:21.916 SO libspdk_bdev_null.so.6.0 00:03:21.916 LIB libspdk_bdev_split.a 00:03:21.916 CC module/bdev/aio/bdev_aio.o 00:03:21.916 SO libspdk_bdev_split.so.6.0 00:03:21.916 SYMLINK libspdk_bdev_null.so 00:03:21.916 CC module/bdev/aio/bdev_aio_rpc.o 00:03:21.916 CC module/bdev/ftl/bdev_ftl.o 00:03:21.916 SYMLINK libspdk_bdev_split.so 00:03:21.916 CC module/bdev/raid/bdev_raid_rpc.o 00:03:21.916 LIB libspdk_bdev_xnvme.a 00:03:22.175 LIB libspdk_bdev_zone_block.a 00:03:22.175 CC module/bdev/iscsi/bdev_iscsi.o 00:03:22.175 SO libspdk_bdev_zone_block.so.6.0 00:03:22.175 SO libspdk_bdev_xnvme.so.3.0 00:03:22.175 SYMLINK libspdk_bdev_zone_block.so 00:03:22.175 CC module/bdev/raid/bdev_raid_sb.o 00:03:22.175 SYMLINK libspdk_bdev_xnvme.so 00:03:22.175 CC module/bdev/raid/raid0.o 00:03:22.175 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:22.175 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:22.175 LIB libspdk_bdev_aio.a 00:03:22.175 CC module/bdev/raid/raid1.o 00:03:22.175 CC module/bdev/raid/concat.o 00:03:22.175 SO libspdk_bdev_aio.so.6.0 00:03:22.434 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:22.434 LIB libspdk_bdev_ftl.a 00:03:22.434 SYMLINK libspdk_bdev_aio.so 00:03:22.434 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:22.434 CC module/bdev/nvme/nvme_rpc.o 00:03:22.434 SO libspdk_bdev_ftl.so.6.0 00:03:22.434 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:22.434 SYMLINK libspdk_bdev_ftl.so 00:03:22.434 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:22.434 CC module/bdev/nvme/bdev_mdns_client.o 00:03:22.434 LIB libspdk_bdev_iscsi.a 00:03:22.434 SO libspdk_bdev_iscsi.so.6.0 00:03:22.434 CC module/bdev/nvme/vbdev_opal.o 00:03:22.434 SYMLINK libspdk_bdev_iscsi.so 00:03:22.434 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:22.693 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:22.693 LIB libspdk_bdev_virtio.a 00:03:22.693 LIB libspdk_bdev_raid.a 00:03:22.693 SO libspdk_bdev_virtio.so.6.0 00:03:22.693 SO libspdk_bdev_raid.so.6.0 00:03:22.693 SYMLINK libspdk_bdev_virtio.so 00:03:22.693 SYMLINK libspdk_bdev_raid.so 00:03:23.629 LIB libspdk_bdev_nvme.a 
00:03:23.629 SO libspdk_bdev_nvme.so.7.1 00:03:23.888 SYMLINK libspdk_bdev_nvme.so 00:03:24.148 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:24.148 CC module/event/subsystems/iobuf/iobuf.o 00:03:24.148 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:24.148 CC module/event/subsystems/fsdev/fsdev.o 00:03:24.148 CC module/event/subsystems/sock/sock.o 00:03:24.148 CC module/event/subsystems/keyring/keyring.o 00:03:24.148 CC module/event/subsystems/vmd/vmd.o 00:03:24.148 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:24.148 CC module/event/subsystems/scheduler/scheduler.o 00:03:24.407 LIB libspdk_event_vhost_blk.a 00:03:24.407 LIB libspdk_event_vmd.a 00:03:24.407 LIB libspdk_event_keyring.a 00:03:24.407 LIB libspdk_event_fsdev.a 00:03:24.407 SO libspdk_event_vhost_blk.so.3.0 00:03:24.407 SO libspdk_event_keyring.so.1.0 00:03:24.407 LIB libspdk_event_iobuf.a 00:03:24.407 SO libspdk_event_vmd.so.6.0 00:03:24.407 SO libspdk_event_fsdev.so.1.0 00:03:24.407 LIB libspdk_event_sock.a 00:03:24.407 LIB libspdk_event_scheduler.a 00:03:24.407 SO libspdk_event_iobuf.so.3.0 00:03:24.407 SO libspdk_event_sock.so.5.0 00:03:24.407 SO libspdk_event_scheduler.so.4.0 00:03:24.407 SYMLINK libspdk_event_vhost_blk.so 00:03:24.407 SYMLINK libspdk_event_fsdev.so 00:03:24.407 SYMLINK libspdk_event_vmd.so 00:03:24.407 SYMLINK libspdk_event_keyring.so 00:03:24.407 SYMLINK libspdk_event_iobuf.so 00:03:24.407 SYMLINK libspdk_event_sock.so 00:03:24.407 SYMLINK libspdk_event_scheduler.so 00:03:24.668 CC module/event/subsystems/accel/accel.o 00:03:24.668 LIB libspdk_event_accel.a 00:03:24.668 SO libspdk_event_accel.so.6.0 00:03:24.668 SYMLINK libspdk_event_accel.so 00:03:24.928 CC module/event/subsystems/bdev/bdev.o 00:03:25.187 LIB libspdk_event_bdev.a 00:03:25.187 SO libspdk_event_bdev.so.6.0 00:03:25.187 SYMLINK libspdk_event_bdev.so 00:03:25.447 CC module/event/subsystems/ublk/ublk.o 00:03:25.447 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:25.447 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:25.447 CC module/event/subsystems/nbd/nbd.o 00:03:25.447 CC module/event/subsystems/scsi/scsi.o 00:03:25.447 LIB libspdk_event_ublk.a 00:03:25.447 LIB libspdk_event_nbd.a 00:03:25.447 SO libspdk_event_ublk.so.3.0 00:03:25.447 LIB libspdk_event_scsi.a 00:03:25.447 SO libspdk_event_nbd.so.6.0 00:03:25.447 SYMLINK libspdk_event_ublk.so 00:03:25.447 SO libspdk_event_scsi.so.6.0 00:03:25.447 SYMLINK libspdk_event_nbd.so 00:03:25.447 LIB libspdk_event_nvmf.a 00:03:25.707 SO libspdk_event_nvmf.so.6.0 00:03:25.707 SYMLINK libspdk_event_scsi.so 00:03:25.707 SYMLINK libspdk_event_nvmf.so 00:03:25.707 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:25.707 CC module/event/subsystems/iscsi/iscsi.o 00:03:25.969 LIB libspdk_event_vhost_scsi.a 00:03:25.969 LIB libspdk_event_iscsi.a 00:03:25.969 SO libspdk_event_vhost_scsi.so.3.0 00:03:25.969 SO libspdk_event_iscsi.so.6.0 00:03:25.969 SYMLINK libspdk_event_vhost_scsi.so 00:03:25.969 SYMLINK libspdk_event_iscsi.so 00:03:26.228 SO libspdk.so.6.0 00:03:26.228 SYMLINK libspdk.so 00:03:26.228 TEST_HEADER include/spdk/accel.h 00:03:26.228 TEST_HEADER include/spdk/accel_module.h 00:03:26.228 CC app/trace_record/trace_record.o 00:03:26.228 TEST_HEADER include/spdk/assert.h 00:03:26.228 TEST_HEADER include/spdk/barrier.h 00:03:26.228 CXX app/trace/trace.o 00:03:26.228 TEST_HEADER include/spdk/base64.h 00:03:26.228 TEST_HEADER include/spdk/bdev.h 00:03:26.228 TEST_HEADER include/spdk/bdev_module.h 00:03:26.228 TEST_HEADER include/spdk/bdev_zone.h 00:03:26.228 TEST_HEADER 
include/spdk/bit_array.h 00:03:26.228 TEST_HEADER include/spdk/bit_pool.h 00:03:26.228 TEST_HEADER include/spdk/blob_bdev.h 00:03:26.228 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:26.228 TEST_HEADER include/spdk/blobfs.h 00:03:26.228 TEST_HEADER include/spdk/blob.h 00:03:26.228 TEST_HEADER include/spdk/conf.h 00:03:26.228 TEST_HEADER include/spdk/config.h 00:03:26.228 TEST_HEADER include/spdk/cpuset.h 00:03:26.228 TEST_HEADER include/spdk/crc16.h 00:03:26.228 TEST_HEADER include/spdk/crc32.h 00:03:26.228 TEST_HEADER include/spdk/crc64.h 00:03:26.228 TEST_HEADER include/spdk/dif.h 00:03:26.228 TEST_HEADER include/spdk/dma.h 00:03:26.228 TEST_HEADER include/spdk/endian.h 00:03:26.228 CC app/nvmf_tgt/nvmf_main.o 00:03:26.228 TEST_HEADER include/spdk/env_dpdk.h 00:03:26.228 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:26.228 TEST_HEADER include/spdk/env.h 00:03:26.228 TEST_HEADER include/spdk/event.h 00:03:26.228 TEST_HEADER include/spdk/fd_group.h 00:03:26.228 TEST_HEADER include/spdk/fd.h 00:03:26.228 TEST_HEADER include/spdk/file.h 00:03:26.228 TEST_HEADER include/spdk/fsdev.h 00:03:26.228 TEST_HEADER include/spdk/fsdev_module.h 00:03:26.228 TEST_HEADER include/spdk/ftl.h 00:03:26.228 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:26.228 CC test/thread/poller_perf/poller_perf.o 00:03:26.228 TEST_HEADER include/spdk/gpt_spec.h 00:03:26.228 TEST_HEADER include/spdk/hexlify.h 00:03:26.228 TEST_HEADER include/spdk/histogram_data.h 00:03:26.228 TEST_HEADER include/spdk/idxd.h 00:03:26.228 CC examples/util/zipf/zipf.o 00:03:26.228 TEST_HEADER include/spdk/idxd_spec.h 00:03:26.228 CC examples/ioat/perf/perf.o 00:03:26.228 TEST_HEADER include/spdk/init.h 00:03:26.228 TEST_HEADER include/spdk/ioat.h 00:03:26.228 TEST_HEADER include/spdk/ioat_spec.h 00:03:26.228 TEST_HEADER include/spdk/iscsi_spec.h 00:03:26.228 TEST_HEADER include/spdk/json.h 00:03:26.489 TEST_HEADER include/spdk/jsonrpc.h 00:03:26.489 TEST_HEADER include/spdk/keyring.h 00:03:26.489 TEST_HEADER include/spdk/keyring_module.h 00:03:26.489 TEST_HEADER include/spdk/likely.h 00:03:26.489 TEST_HEADER include/spdk/log.h 00:03:26.489 TEST_HEADER include/spdk/lvol.h 00:03:26.489 TEST_HEADER include/spdk/md5.h 00:03:26.489 TEST_HEADER include/spdk/memory.h 00:03:26.489 TEST_HEADER include/spdk/mmio.h 00:03:26.489 TEST_HEADER include/spdk/nbd.h 00:03:26.489 TEST_HEADER include/spdk/net.h 00:03:26.489 TEST_HEADER include/spdk/notify.h 00:03:26.489 TEST_HEADER include/spdk/nvme.h 00:03:26.489 TEST_HEADER include/spdk/nvme_intel.h 00:03:26.489 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:26.489 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:26.489 TEST_HEADER include/spdk/nvme_spec.h 00:03:26.489 TEST_HEADER include/spdk/nvme_zns.h 00:03:26.489 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:26.489 CC test/app/bdev_svc/bdev_svc.o 00:03:26.489 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:26.489 TEST_HEADER include/spdk/nvmf.h 00:03:26.489 TEST_HEADER include/spdk/nvmf_spec.h 00:03:26.489 CC test/dma/test_dma/test_dma.o 00:03:26.489 TEST_HEADER include/spdk/nvmf_transport.h 00:03:26.489 TEST_HEADER include/spdk/opal.h 00:03:26.489 TEST_HEADER include/spdk/opal_spec.h 00:03:26.489 TEST_HEADER include/spdk/pci_ids.h 00:03:26.489 TEST_HEADER include/spdk/pipe.h 00:03:26.489 TEST_HEADER include/spdk/queue.h 00:03:26.489 TEST_HEADER include/spdk/reduce.h 00:03:26.489 TEST_HEADER include/spdk/rpc.h 00:03:26.489 TEST_HEADER include/spdk/scheduler.h 00:03:26.489 TEST_HEADER include/spdk/scsi.h 00:03:26.489 TEST_HEADER include/spdk/scsi_spec.h 
00:03:26.489 TEST_HEADER include/spdk/sock.h 00:03:26.489 TEST_HEADER include/spdk/stdinc.h 00:03:26.489 TEST_HEADER include/spdk/string.h 00:03:26.489 TEST_HEADER include/spdk/thread.h 00:03:26.489 TEST_HEADER include/spdk/trace.h 00:03:26.489 TEST_HEADER include/spdk/trace_parser.h 00:03:26.489 TEST_HEADER include/spdk/tree.h 00:03:26.489 TEST_HEADER include/spdk/ublk.h 00:03:26.489 TEST_HEADER include/spdk/util.h 00:03:26.489 TEST_HEADER include/spdk/uuid.h 00:03:26.489 TEST_HEADER include/spdk/version.h 00:03:26.489 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:26.489 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:26.489 TEST_HEADER include/spdk/vhost.h 00:03:26.489 TEST_HEADER include/spdk/vmd.h 00:03:26.489 TEST_HEADER include/spdk/xor.h 00:03:26.489 TEST_HEADER include/spdk/zipf.h 00:03:26.489 CXX test/cpp_headers/accel.o 00:03:26.489 LINK poller_perf 00:03:26.489 LINK interrupt_tgt 00:03:26.489 LINK nvmf_tgt 00:03:26.489 LINK zipf 00:03:26.489 LINK spdk_trace_record 00:03:26.489 LINK bdev_svc 00:03:26.489 LINK ioat_perf 00:03:26.489 CXX test/cpp_headers/accel_module.o 00:03:26.489 CXX test/cpp_headers/assert.o 00:03:26.748 LINK spdk_trace 00:03:26.748 CC test/rpc_client/rpc_client_test.o 00:03:26.748 CC examples/ioat/verify/verify.o 00:03:26.748 CXX test/cpp_headers/barrier.o 00:03:26.748 CXX test/cpp_headers/base64.o 00:03:26.748 CC test/event/event_perf/event_perf.o 00:03:26.748 CC test/env/mem_callbacks/mem_callbacks.o 00:03:26.748 LINK test_dma 00:03:26.748 CC examples/thread/thread/thread_ex.o 00:03:26.748 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:27.007 LINK rpc_client_test 00:03:27.007 CXX test/cpp_headers/bdev.o 00:03:27.007 LINK verify 00:03:27.007 CC app/iscsi_tgt/iscsi_tgt.o 00:03:27.007 LINK event_perf 00:03:27.007 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:27.007 CXX test/cpp_headers/bdev_module.o 00:03:27.007 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:27.007 CC test/event/reactor/reactor.o 00:03:27.007 CC test/event/reactor_perf/reactor_perf.o 00:03:27.007 LINK iscsi_tgt 00:03:27.007 CC test/env/vtophys/vtophys.o 00:03:27.007 LINK thread 00:03:27.265 LINK nvme_fuzz 00:03:27.265 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:27.265 LINK reactor 00:03:27.265 LINK reactor_perf 00:03:27.265 CXX test/cpp_headers/bdev_zone.o 00:03:27.265 LINK vtophys 00:03:27.265 LINK mem_callbacks 00:03:27.265 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:27.523 CXX test/cpp_headers/bit_array.o 00:03:27.523 CC examples/sock/hello_world/hello_sock.o 00:03:27.523 CXX test/cpp_headers/bit_pool.o 00:03:27.523 CC test/event/app_repeat/app_repeat.o 00:03:27.523 CC app/spdk_tgt/spdk_tgt.o 00:03:27.523 CC test/accel/dif/dif.o 00:03:27.523 LINK env_dpdk_post_init 00:03:27.523 CC test/blobfs/mkfs/mkfs.o 00:03:27.523 CXX test/cpp_headers/blob_bdev.o 00:03:27.523 LINK app_repeat 00:03:27.523 LINK vhost_fuzz 00:03:27.523 LINK spdk_tgt 00:03:27.523 LINK hello_sock 00:03:27.782 LINK mkfs 00:03:27.782 CXX test/cpp_headers/blobfs_bdev.o 00:03:27.782 CC test/env/memory/memory_ut.o 00:03:27.782 CC test/lvol/esnap/esnap.o 00:03:27.782 CC test/event/scheduler/scheduler.o 00:03:27.782 CC app/spdk_lspci/spdk_lspci.o 00:03:27.782 CC test/nvme/aer/aer.o 00:03:27.782 CXX test/cpp_headers/blobfs.o 00:03:27.782 CC examples/vmd/lsvmd/lsvmd.o 00:03:28.041 CC test/nvme/reset/reset.o 00:03:28.041 LINK spdk_lspci 00:03:28.041 LINK scheduler 00:03:28.041 LINK lsvmd 00:03:28.041 CXX test/cpp_headers/blob.o 00:03:28.041 LINK dif 00:03:28.041 LINK reset 00:03:28.041 LINK aer 00:03:28.041 CC 
app/spdk_nvme_perf/perf.o 00:03:28.041 CC examples/vmd/led/led.o 00:03:28.041 CXX test/cpp_headers/conf.o 00:03:28.299 CC test/nvme/sgl/sgl.o 00:03:28.299 LINK iscsi_fuzz 00:03:28.299 CC test/nvme/e2edp/nvme_dp.o 00:03:28.299 LINK led 00:03:28.299 CXX test/cpp_headers/config.o 00:03:28.299 CC test/nvme/overhead/overhead.o 00:03:28.299 CXX test/cpp_headers/cpuset.o 00:03:28.299 LINK sgl 00:03:28.299 CC test/bdev/bdevio/bdevio.o 00:03:28.557 CC test/app/histogram_perf/histogram_perf.o 00:03:28.557 CXX test/cpp_headers/crc16.o 00:03:28.557 CXX test/cpp_headers/crc32.o 00:03:28.557 LINK nvme_dp 00:03:28.557 LINK histogram_perf 00:03:28.557 CC examples/idxd/perf/perf.o 00:03:28.557 LINK overhead 00:03:28.557 CC test/env/pci/pci_ut.o 00:03:28.815 CXX test/cpp_headers/crc64.o 00:03:28.815 CC test/app/jsoncat/jsoncat.o 00:03:28.815 CC test/nvme/err_injection/err_injection.o 00:03:28.815 CC test/nvme/startup/startup.o 00:03:28.815 LINK spdk_nvme_perf 00:03:28.815 CXX test/cpp_headers/dif.o 00:03:28.815 LINK bdevio 00:03:28.815 LINK memory_ut 00:03:28.815 LINK jsoncat 00:03:28.815 LINK startup 00:03:28.815 LINK idxd_perf 00:03:28.815 CXX test/cpp_headers/dma.o 00:03:28.815 LINK err_injection 00:03:29.073 CXX test/cpp_headers/endian.o 00:03:29.073 CC app/spdk_nvme_identify/identify.o 00:03:29.073 CXX test/cpp_headers/env_dpdk.o 00:03:29.073 CXX test/cpp_headers/env.o 00:03:29.073 LINK pci_ut 00:03:29.073 CC test/app/stub/stub.o 00:03:29.073 CXX test/cpp_headers/event.o 00:03:29.073 CXX test/cpp_headers/fd_group.o 00:03:29.073 CC test/nvme/reserve/reserve.o 00:03:29.073 CC examples/accel/perf/accel_perf.o 00:03:29.073 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:29.073 LINK stub 00:03:29.331 CXX test/cpp_headers/fd.o 00:03:29.331 CC examples/nvme/hello_world/hello_world.o 00:03:29.331 CC examples/nvme/reconnect/reconnect.o 00:03:29.331 CC examples/blob/hello_world/hello_blob.o 00:03:29.331 LINK reserve 00:03:29.331 LINK hello_fsdev 00:03:29.331 CXX test/cpp_headers/file.o 00:03:29.331 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:29.331 LINK hello_world 00:03:29.589 LINK hello_blob 00:03:29.589 CXX test/cpp_headers/fsdev.o 00:03:29.589 LINK accel_perf 00:03:29.589 CC test/nvme/simple_copy/simple_copy.o 00:03:29.589 CXX test/cpp_headers/fsdev_module.o 00:03:29.589 CXX test/cpp_headers/ftl.o 00:03:29.589 LINK spdk_nvme_identify 00:03:29.589 LINK reconnect 00:03:29.589 CXX test/cpp_headers/fuse_dispatcher.o 00:03:29.589 CXX test/cpp_headers/gpt_spec.o 00:03:29.589 CC app/spdk_nvme_discover/discovery_aer.o 00:03:29.589 LINK simple_copy 00:03:29.589 CXX test/cpp_headers/hexlify.o 00:03:29.847 CC examples/blob/cli/blobcli.o 00:03:29.847 CC app/spdk_top/spdk_top.o 00:03:29.847 CC test/nvme/connect_stress/connect_stress.o 00:03:29.847 LINK nvme_manage 00:03:29.847 CXX test/cpp_headers/histogram_data.o 00:03:29.847 CC test/nvme/boot_partition/boot_partition.o 00:03:29.847 LINK spdk_nvme_discover 00:03:29.847 CC app/vhost/vhost.o 00:03:29.847 CC examples/bdev/hello_world/hello_bdev.o 00:03:29.847 CXX test/cpp_headers/idxd.o 00:03:29.847 LINK connect_stress 00:03:29.847 CC examples/nvme/arbitration/arbitration.o 00:03:30.104 LINK boot_partition 00:03:30.104 CXX test/cpp_headers/idxd_spec.o 00:03:30.104 LINK vhost 00:03:30.104 CC app/spdk_dd/spdk_dd.o 00:03:30.104 LINK blobcli 00:03:30.104 LINK hello_bdev 00:03:30.104 CC test/nvme/compliance/nvme_compliance.o 00:03:30.104 CXX test/cpp_headers/init.o 00:03:30.104 CC examples/bdev/bdevperf/bdevperf.o 00:03:30.104 CXX test/cpp_headers/ioat.o 00:03:30.363 
LINK arbitration 00:03:30.363 CXX test/cpp_headers/ioat_spec.o 00:03:30.363 CC examples/nvme/hotplug/hotplug.o 00:03:30.363 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:30.363 CC app/fio/nvme/fio_plugin.o 00:03:30.363 LINK nvme_compliance 00:03:30.363 CXX test/cpp_headers/iscsi_spec.o 00:03:30.363 LINK spdk_dd 00:03:30.622 LINK spdk_top 00:03:30.622 CC test/nvme/fused_ordering/fused_ordering.o 00:03:30.622 LINK cmb_copy 00:03:30.622 CXX test/cpp_headers/json.o 00:03:30.622 LINK hotplug 00:03:30.622 CC examples/nvme/abort/abort.o 00:03:30.622 LINK fused_ordering 00:03:30.622 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:30.622 CXX test/cpp_headers/jsonrpc.o 00:03:30.622 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:30.622 CXX test/cpp_headers/keyring.o 00:03:30.622 CC test/nvme/fdp/fdp.o 00:03:30.880 CXX test/cpp_headers/keyring_module.o 00:03:30.880 LINK spdk_nvme 00:03:30.880 LINK bdevperf 00:03:30.880 LINK pmr_persistence 00:03:30.880 CXX test/cpp_headers/likely.o 00:03:30.880 CC test/nvme/cuse/cuse.o 00:03:30.880 LINK doorbell_aers 00:03:30.880 CXX test/cpp_headers/log.o 00:03:30.880 LINK abort 00:03:30.880 CXX test/cpp_headers/lvol.o 00:03:30.880 CXX test/cpp_headers/md5.o 00:03:30.880 CXX test/cpp_headers/memory.o 00:03:30.880 CC app/fio/bdev/fio_plugin.o 00:03:31.138 CXX test/cpp_headers/mmio.o 00:03:31.138 CXX test/cpp_headers/nbd.o 00:03:31.138 LINK fdp 00:03:31.138 CXX test/cpp_headers/net.o 00:03:31.138 CXX test/cpp_headers/notify.o 00:03:31.138 CXX test/cpp_headers/nvme.o 00:03:31.138 CXX test/cpp_headers/nvme_intel.o 00:03:31.138 CXX test/cpp_headers/nvme_ocssd.o 00:03:31.138 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:31.138 CXX test/cpp_headers/nvme_spec.o 00:03:31.138 CXX test/cpp_headers/nvme_zns.o 00:03:31.138 CXX test/cpp_headers/nvmf_cmd.o 00:03:31.138 CC examples/nvmf/nvmf/nvmf.o 00:03:31.405 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:31.405 CXX test/cpp_headers/nvmf.o 00:03:31.405 CXX test/cpp_headers/nvmf_spec.o 00:03:31.405 CXX test/cpp_headers/nvmf_transport.o 00:03:31.405 CXX test/cpp_headers/opal.o 00:03:31.405 CXX test/cpp_headers/opal_spec.o 00:03:31.405 CXX test/cpp_headers/pci_ids.o 00:03:31.405 LINK spdk_bdev 00:03:31.405 CXX test/cpp_headers/pipe.o 00:03:31.405 CXX test/cpp_headers/queue.o 00:03:31.405 CXX test/cpp_headers/reduce.o 00:03:31.405 CXX test/cpp_headers/rpc.o 00:03:31.405 CXX test/cpp_headers/scheduler.o 00:03:31.405 CXX test/cpp_headers/scsi.o 00:03:31.683 LINK nvmf 00:03:31.683 CXX test/cpp_headers/scsi_spec.o 00:03:31.683 CXX test/cpp_headers/sock.o 00:03:31.683 CXX test/cpp_headers/stdinc.o 00:03:31.683 CXX test/cpp_headers/string.o 00:03:31.683 CXX test/cpp_headers/thread.o 00:03:31.683 CXX test/cpp_headers/trace.o 00:03:31.683 CXX test/cpp_headers/trace_parser.o 00:03:31.683 CXX test/cpp_headers/tree.o 00:03:31.683 CXX test/cpp_headers/ublk.o 00:03:31.683 CXX test/cpp_headers/util.o 00:03:31.683 CXX test/cpp_headers/uuid.o 00:03:31.683 CXX test/cpp_headers/version.o 00:03:31.683 CXX test/cpp_headers/vfio_user_pci.o 00:03:31.683 CXX test/cpp_headers/vfio_user_spec.o 00:03:31.683 CXX test/cpp_headers/vhost.o 00:03:31.683 CXX test/cpp_headers/vmd.o 00:03:31.683 CXX test/cpp_headers/xor.o 00:03:31.683 CXX test/cpp_headers/zipf.o 00:03:31.942 LINK cuse 00:03:32.880 LINK esnap 00:03:33.141 00:03:33.141 real 1m2.211s 00:03:33.141 user 5m57.675s 00:03:33.141 sys 1m1.579s 00:03:33.141 07:35:22 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:33.141 07:35:22 make -- common/autotest_common.sh@10 -- $ set +x 00:03:33.141 
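Note on the long CXX test/cpp_headers/*.o run that ends above: it is the header self-containedness check, where each public spdk header is compiled as its own C++ translation unit so a header that silently depends on another include fails the build here rather than in a consumer. A minimal sketch of the idea, with hypothetical paths and flags rather than the actual generated harness:

    # compile each public header as a standalone C++ TU; a failure means the
    # header is not self-contained for C++ consumers
    for hdr in include/spdk/*.h; do
      tu=$(mktemp --suffix=.cpp)
      printf '#include <spdk/%s>\n' "$(basename "$hdr")" > "$tu"
      c++ -Iinclude -std=c++17 -c "$tu" -o /dev/null || echo "not self-contained: $hdr"
      rm -f "$tu"
    done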
************************************ 00:03:33.141 END TEST make 00:03:33.141 ************************************ 00:03:33.141 07:35:22 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:33.141 07:35:22 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:33.141 07:35:22 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:33.141 07:35:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:33.141 07:35:22 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:33.141 07:35:22 -- pm/common@44 -- $ pid=5082 00:03:33.141 07:35:22 -- pm/common@50 -- $ kill -TERM 5082 00:03:33.141 07:35:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:33.141 07:35:22 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:33.141 07:35:22 -- pm/common@44 -- $ pid=5083 00:03:33.141 07:35:22 -- pm/common@50 -- $ kill -TERM 5083 00:03:33.141 07:35:22 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:33.141 07:35:22 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:33.141 07:35:22 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:33.141 07:35:22 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:33.142 07:35:22 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:33.142 07:35:23 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:33.142 07:35:23 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:33.142 07:35:23 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:33.142 07:35:23 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:33.142 07:35:23 -- scripts/common.sh@336 -- # IFS=.-: 00:03:33.142 07:35:23 -- scripts/common.sh@336 -- # read -ra ver1 00:03:33.142 07:35:23 -- scripts/common.sh@337 -- # IFS=.-: 00:03:33.142 07:35:23 -- scripts/common.sh@337 -- # read -ra ver2 00:03:33.142 07:35:23 -- scripts/common.sh@338 -- # local 'op=<' 00:03:33.142 07:35:23 -- scripts/common.sh@340 -- # ver1_l=2 00:03:33.142 07:35:23 -- scripts/common.sh@341 -- # ver2_l=1 00:03:33.142 07:35:23 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:33.142 07:35:23 -- scripts/common.sh@344 -- # case "$op" in 00:03:33.142 07:35:23 -- scripts/common.sh@345 -- # : 1 00:03:33.142 07:35:23 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:33.142 07:35:23 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:33.142 07:35:23 -- scripts/common.sh@365 -- # decimal 1 00:03:33.142 07:35:23 -- scripts/common.sh@353 -- # local d=1 00:03:33.142 07:35:23 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:33.142 07:35:23 -- scripts/common.sh@355 -- # echo 1 00:03:33.142 07:35:23 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:33.142 07:35:23 -- scripts/common.sh@366 -- # decimal 2 00:03:33.142 07:35:23 -- scripts/common.sh@353 -- # local d=2 00:03:33.142 07:35:23 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:33.142 07:35:23 -- scripts/common.sh@355 -- # echo 2 00:03:33.142 07:35:23 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:33.142 07:35:23 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:33.142 07:35:23 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:33.142 07:35:23 -- scripts/common.sh@368 -- # return 0 00:03:33.142 07:35:23 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:33.142 07:35:23 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:33.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.142 --rc genhtml_branch_coverage=1 00:03:33.142 --rc genhtml_function_coverage=1 00:03:33.142 --rc genhtml_legend=1 00:03:33.142 --rc geninfo_all_blocks=1 00:03:33.142 --rc geninfo_unexecuted_blocks=1 00:03:33.142 00:03:33.142 ' 00:03:33.142 07:35:23 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:33.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.142 --rc genhtml_branch_coverage=1 00:03:33.142 --rc genhtml_function_coverage=1 00:03:33.142 --rc genhtml_legend=1 00:03:33.142 --rc geninfo_all_blocks=1 00:03:33.142 --rc geninfo_unexecuted_blocks=1 00:03:33.142 00:03:33.142 ' 00:03:33.142 07:35:23 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:33.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.142 --rc genhtml_branch_coverage=1 00:03:33.142 --rc genhtml_function_coverage=1 00:03:33.142 --rc genhtml_legend=1 00:03:33.142 --rc geninfo_all_blocks=1 00:03:33.142 --rc geninfo_unexecuted_blocks=1 00:03:33.142 00:03:33.142 ' 00:03:33.142 07:35:23 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:33.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.142 --rc genhtml_branch_coverage=1 00:03:33.142 --rc genhtml_function_coverage=1 00:03:33.142 --rc genhtml_legend=1 00:03:33.142 --rc geninfo_all_blocks=1 00:03:33.142 --rc geninfo_unexecuted_blocks=1 00:03:33.142 00:03:33.142 ' 00:03:33.142 07:35:23 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:33.142 07:35:23 -- nvmf/common.sh@7 -- # uname -s 00:03:33.142 07:35:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:33.142 07:35:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:33.142 07:35:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:33.142 07:35:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:33.142 07:35:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:33.142 07:35:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:33.142 07:35:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:33.142 07:35:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:33.142 07:35:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:33.142 07:35:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:33.142 07:35:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:39fc8c46-1855-47aa-88c4-9fe997d49a3f 00:03:33.142 
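Note on the cmp_versions trace above: it is a field-wise version comparison in pure bash. Both version strings are split on '.', '-' and ':' into arrays, missing fields default to 0, and fields are compared numerically left to right, so lt 1.15 2 is decided on the first field and returns 0, which is what later selects the lcov 1.x --rc option names. A condensed sketch of the same logic (assuming purely numeric fields, as here):

    cmp_lt() {  # cmp_lt A B -> exit 0 iff version A < version B
      local IFS=.-:
      local -a a=($1) b=($2)
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0  # first differing field decides
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1  # equal versions are not "less than"
    }
    cmp_lt 1.15 2 && echo "lcov < 2: use the 1.x --rc option names"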
07:35:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=39fc8c46-1855-47aa-88c4-9fe997d49a3f 00:03:33.142 07:35:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:33.142 07:35:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:33.142 07:35:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:33.142 07:35:23 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:33.142 07:35:23 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:33.142 07:35:23 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:33.142 07:35:23 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:33.142 07:35:23 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:33.142 07:35:23 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:33.142 07:35:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:33.142 07:35:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:33.142 07:35:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:33.142 07:35:23 -- paths/export.sh@5 -- # export PATH 00:03:33.142 07:35:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:33.142 07:35:23 -- nvmf/common.sh@51 -- # : 0 00:03:33.142 07:35:23 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:33.142 07:35:23 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:33.142 07:35:23 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:33.142 07:35:23 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:33.142 07:35:23 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:33.142 07:35:23 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:33.142 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:33.142 07:35:23 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:33.142 07:35:23 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:33.142 07:35:23 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:33.142 07:35:23 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:33.403 07:35:23 -- spdk/autotest.sh@32 -- # uname -s 00:03:33.403 07:35:23 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:33.403 07:35:23 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:33.403 07:35:23 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:33.403 07:35:23 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:33.403 07:35:23 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:33.403 07:35:23 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:33.403 07:35:23 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:33.403 07:35:23 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:33.403 07:35:23 -- spdk/autotest.sh@48 -- # udevadm_pid=54198 00:03:33.403 07:35:23 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:33.403 07:35:23 -- pm/common@17 -- # local monitor 00:03:33.403 07:35:23 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:33.403 07:35:23 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:33.403 07:35:23 -- pm/common@25 -- # sleep 1 00:03:33.403 07:35:23 -- pm/common@21 -- # date +%s 00:03:33.403 07:35:23 -- pm/common@21 -- # date +%s 00:03:33.403 07:35:23 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:33.403 07:35:23 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732865723 00:03:33.403 07:35:23 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732865723 00:03:33.403 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732865723_collect-cpu-load.pm.log 00:03:33.403 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732865723_collect-vmstat.pm.log 00:03:34.340 07:35:24 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:34.340 07:35:24 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:34.340 07:35:24 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:34.340 07:35:24 -- common/autotest_common.sh@10 -- # set +x 00:03:34.340 07:35:24 -- spdk/autotest.sh@59 -- # create_test_list 00:03:34.340 07:35:24 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:34.340 07:35:24 -- common/autotest_common.sh@10 -- # set +x 00:03:34.340 07:35:24 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:34.340 07:35:24 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:34.340 07:35:24 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:34.340 07:35:24 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:34.340 07:35:24 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:34.340 07:35:24 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:34.340 07:35:24 -- common/autotest_common.sh@1457 -- # uname 00:03:34.340 07:35:24 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:34.340 07:35:24 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:34.340 07:35:24 -- common/autotest_common.sh@1477 -- # uname 00:03:34.340 07:35:24 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:34.340 07:35:24 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:34.340 07:35:24 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:34.340 lcov: LCOV version 1.15 00:03:34.340 07:35:24 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:46.548 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:46.548 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:01.440 07:35:49 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:01.440 07:35:49 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:01.440 07:35:49 -- common/autotest_common.sh@10 -- # set +x 00:04:01.440 07:35:49 -- spdk/autotest.sh@78 -- # rm -f 00:04:01.440 07:35:49 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:01.440 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:01.440 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:01.440 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:01.440 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:01.440 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:01.440 07:35:50 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:01.440 07:35:50 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:01.440 07:35:50 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:01.440 07:35:50 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:01.440 07:35:50 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:01.440 07:35:50 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:01.440 07:35:50 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:01.440 07:35:50 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:01.440 07:35:50 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:01.440 07:35:50 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:01.440 07:35:50 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:04:01.440 07:35:50 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:01.440 07:35:50 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:01.440 07:35:50 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:01.440 07:35:50 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:01.440 07:35:50 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:04:01.440 07:35:50 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:04:01.440 07:35:50 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:01.440 07:35:50 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:01.440 07:35:50 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:01.440 07:35:50 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:04:01.440 07:35:50 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:04:01.440 07:35:50 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:01.440 07:35:50 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:01.440 07:35:50 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:01.440 07:35:50 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:04:01.440 07:35:50 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:04:01.440 07:35:50 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:01.440 07:35:50 
-- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:01.440 07:35:50 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:01.440 07:35:50 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:04:01.440 07:35:50 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:04:01.440 07:35:50 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:01.440 07:35:50 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:01.440 07:35:50 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:01.440 07:35:50 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:04:01.440 07:35:50 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:04:01.440 07:35:50 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:01.440 07:35:50 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:01.440 07:35:50 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:01.440 07:35:50 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:01.440 07:35:50 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:01.440 07:35:50 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:01.440 07:35:50 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:01.440 07:35:50 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:01.440 No valid GPT data, bailing 00:04:01.440 07:35:50 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:01.440 07:35:50 -- scripts/common.sh@394 -- # pt= 00:04:01.440 07:35:50 -- scripts/common.sh@395 -- # return 1 00:04:01.440 07:35:50 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:01.440 1+0 records in 00:04:01.440 1+0 records out 00:04:01.440 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0273338 s, 38.4 MB/s 00:04:01.440 07:35:50 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:01.440 07:35:50 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:01.440 07:35:50 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:01.440 07:35:50 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:01.440 07:35:50 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:01.440 No valid GPT data, bailing 00:04:01.440 07:35:50 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:01.440 07:35:50 -- scripts/common.sh@394 -- # pt= 00:04:01.440 07:35:50 -- scripts/common.sh@395 -- # return 1 00:04:01.440 07:35:50 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:01.440 1+0 records in 00:04:01.440 1+0 records out 00:04:01.440 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00631946 s, 166 MB/s 00:04:01.440 07:35:50 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:01.440 07:35:50 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:01.440 07:35:50 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:04:01.440 07:35:50 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:04:01.440 07:35:50 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:01.440 No valid GPT data, bailing 00:04:01.440 07:35:50 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:01.440 07:35:50 -- scripts/common.sh@394 -- # pt= 00:04:01.440 07:35:50 -- scripts/common.sh@395 -- # return 1 00:04:01.440 07:35:50 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:01.440 1+0 
records in 00:04:01.440 1+0 records out 00:04:01.440 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00631888 s, 166 MB/s 00:04:01.440 07:35:50 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:01.440 07:35:50 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:01.440 07:35:50 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:04:01.440 07:35:50 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:04:01.440 07:35:50 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:04:01.440 No valid GPT data, bailing 00:04:01.440 07:35:50 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:04:01.440 07:35:50 -- scripts/common.sh@394 -- # pt= 00:04:01.440 07:35:50 -- scripts/common.sh@395 -- # return 1 00:04:01.440 07:35:50 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:04:01.440 1+0 records in 00:04:01.440 1+0 records out 00:04:01.440 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00476542 s, 220 MB/s 00:04:01.440 07:35:50 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:01.440 07:35:50 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:01.440 07:35:50 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:04:01.440 07:35:50 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:04:01.440 07:35:50 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:04:01.440 No valid GPT data, bailing 00:04:01.440 07:35:50 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:04:01.440 07:35:50 -- scripts/common.sh@394 -- # pt= 00:04:01.440 07:35:50 -- scripts/common.sh@395 -- # return 1 00:04:01.440 07:35:50 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:04:01.440 1+0 records in 00:04:01.440 1+0 records out 00:04:01.440 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00531359 s, 197 MB/s 00:04:01.440 07:35:50 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:01.440 07:35:50 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:01.440 07:35:50 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:04:01.440 07:35:50 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:04:01.440 07:35:50 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:01.440 No valid GPT data, bailing 00:04:01.440 07:35:50 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:01.440 07:35:50 -- scripts/common.sh@394 -- # pt= 00:04:01.440 07:35:50 -- scripts/common.sh@395 -- # return 1 00:04:01.440 07:35:50 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:01.440 1+0 records in 00:04:01.440 1+0 records out 00:04:01.440 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00570362 s, 184 MB/s 00:04:01.440 07:35:51 -- spdk/autotest.sh@105 -- # sync 00:04:01.440 07:35:51 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:01.440 07:35:51 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:01.440 07:35:51 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:03.360 07:35:53 -- spdk/autotest.sh@111 -- # uname -s 00:04:03.360 07:35:53 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:03.360 07:35:53 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:03.360 07:35:53 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:03.621 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:04.194 
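Note on the dd loop that finished above: it is autotest's pre-run scrub. Each non-partition NVMe namespace is skipped if zoned, checked for a partition table, and only when spdk-gpt.py bails ("No valid GPT data") and blkid reports no PTTYPE does the first MiB get zeroed, so stale metadata cannot bleed between runs. A condensed stand-alone rendering (destructive; it assumes spdk-gpt.py exits nonzero when it finds no valid GPT, which the trace's return 1 path suggests):

    shopt -s extglob
    for dev in /dev/nvme*n!(*p*); do
      name=$(basename "$dev")
      # zoned namespaces need zone-aware handling; skip the scrub for them
      if [[ -e /sys/block/$name/queue/zoned ]] &&
         [[ $(cat "/sys/block/$name/queue/zoned") != none ]]; then
        continue
      fi
      # scrub only devices with no recognizable GPT or other partition table
      if ! /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py "$dev" &&
         [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
        dd if=/dev/zero of="$dev" bs=1M count=1  # clobber the first MiB only
      fi
    done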
Hugepages 00:04:04.194 node hugesize free / total 00:04:04.194 node0 1048576kB 0 / 0 00:04:04.194 node0 2048kB 0 / 0 00:04:04.194 00:04:04.194 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:04.194 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:04.194 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:04.454 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:04.454 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:04:04.454 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:04.454 07:35:54 -- spdk/autotest.sh@117 -- # uname -s 00:04:04.454 07:35:54 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:04.454 07:35:54 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:04.454 07:35:54 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:05.028 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:05.610 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:05.610 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:05.610 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:05.610 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:05.610 07:35:55 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:06.556 07:35:56 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:06.556 07:35:56 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:06.556 07:35:56 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:06.556 07:35:56 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:06.556 07:35:56 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:06.556 07:35:56 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:06.556 07:35:56 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:06.556 07:35:56 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:06.556 07:35:56 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:06.817 07:35:56 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:06.817 07:35:56 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:06.817 07:35:56 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:07.078 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:07.078 Waiting for block devices as requested 00:04:07.078 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:07.340 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:07.340 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:04:07.340 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:04:12.628 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:04:12.628 07:36:02 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:12.628 07:36:02 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:12.628 07:36:02 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:12.628 07:36:02 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:12.628 07:36:02 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:12.628 07:36:02 -- common/autotest_common.sh@1488 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:12.628 07:36:02 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:12.628 07:36:02 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:12.628 07:36:02 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:12.628 07:36:02 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:12.628 07:36:02 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:12.628 07:36:02 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:12.628 07:36:02 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:12.628 07:36:02 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:12.628 07:36:02 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:12.628 07:36:02 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:12.628 07:36:02 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:12.628 07:36:02 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:12.628 07:36:02 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:12.628 07:36:02 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:12.628 07:36:02 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:12.628 07:36:02 -- common/autotest_common.sh@1543 -- # continue 00:04:12.628 07:36:02 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:12.628 07:36:02 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:12.628 07:36:02 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:12.628 07:36:02 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:12.628 07:36:02 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:12.628 07:36:02 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:12.628 07:36:02 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:12.628 07:36:02 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:12.628 07:36:02 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:12.628 07:36:02 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:12.628 07:36:02 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:12.628 07:36:02 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:12.628 07:36:02 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:12.628 07:36:02 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:12.628 07:36:02 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:12.628 07:36:02 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:12.628 07:36:02 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:12.628 07:36:02 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:12.628 07:36:02 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:12.628 07:36:02 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:12.629 07:36:02 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:12.629 07:36:02 -- common/autotest_common.sh@1543 -- # continue 00:04:12.629 07:36:02 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:12.629 07:36:02 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:12.629 07:36:02 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:12.629 07:36:02 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:04:12.629 07:36:02 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:12.629 07:36:02 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:12.629 07:36:02 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:12.629 07:36:02 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:04:12.629 07:36:02 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:04:12.629 07:36:02 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:04:12.629 07:36:02 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:12.629 07:36:02 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:04:12.629 07:36:02 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:12.629 07:36:02 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:12.629 07:36:02 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:12.629 07:36:02 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:12.629 07:36:02 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:04:12.629 07:36:02 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:12.629 07:36:02 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:12.629 07:36:02 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:12.629 07:36:02 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:12.629 07:36:02 -- common/autotest_common.sh@1543 -- # continue 00:04:12.629 07:36:02 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:12.629 07:36:02 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:12.629 07:36:02 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:04:12.629 07:36:02 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:12.629 07:36:02 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:12.629 07:36:02 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:12.629 07:36:02 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:12.629 07:36:02 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:04:12.629 07:36:02 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:04:12.629 07:36:02 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:04:12.629 07:36:02 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:04:12.629 07:36:02 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:12.629 07:36:02 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:12.629 07:36:02 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:12.629 07:36:02 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:12.629 07:36:02 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:12.629 07:36:02 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:04:12.629 07:36:02 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:12.629 07:36:02 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:12.629 07:36:02 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:12.629 07:36:02 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
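Note on the four near-identical probes above (over controllers enumerated earlier via gen_nvme.sh | jq -r '.config[].params.traddr'): each checks Optional Admin Command Support before any namespace revert. An oacs of 0x12a has bit 3 (0x8, namespace management) set, and an unvmcap of 0 means no unallocated NVM capacity, so nvme_namespace_revert passes over each controller with continue. One probe, condensed from the same commands as the trace (bash arithmetic tolerates the hex value and its leading space):

    ctrlr=/dev/nvme1                                                # as resolved above
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)         # e.g. ' 0x12a'
    if (( (oacs & 0x8) != 0 )); then                                # bit 3: NS management
      unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2) # e.g. ' 0'
      (( unvmcap == 0 )) && echo "$ctrlr: all capacity allocated, nothing to revert"
    fi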
00:04:12.629 07:36:02 -- common/autotest_common.sh@1543 -- # continue 00:04:12.629 07:36:02 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:12.629 07:36:02 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:12.629 07:36:02 -- common/autotest_common.sh@10 -- # set +x 00:04:12.629 07:36:02 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:12.629 07:36:02 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:12.629 07:36:02 -- common/autotest_common.sh@10 -- # set +x 00:04:12.629 07:36:02 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:13.202 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:13.773 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:13.773 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:13.773 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:13.773 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:13.773 07:36:03 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:13.773 07:36:03 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:13.773 07:36:03 -- common/autotest_common.sh@10 -- # set +x 00:04:13.773 07:36:03 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:13.773 07:36:03 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:13.773 07:36:03 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:13.773 07:36:03 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:13.773 07:36:03 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:13.773 07:36:03 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:13.773 07:36:03 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:13.773 07:36:03 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:13.773 07:36:03 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:13.773 07:36:03 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:13.773 07:36:03 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:13.773 07:36:03 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:13.773 07:36:03 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:14.035 07:36:03 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:14.035 07:36:03 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:14.035 07:36:03 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:14.035 07:36:03 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:14.035 07:36:03 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:14.035 07:36:03 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:14.035 07:36:03 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:14.035 07:36:03 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:14.035 07:36:03 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:14.035 07:36:03 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:14.035 07:36:03 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:14.035 07:36:03 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:14.035 07:36:03 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:14.035 07:36:03 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
00:04:14.035 07:36:03 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:14.035 07:36:03 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:14.035 07:36:03 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:14.035 07:36:03 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:14.035 07:36:03 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:14.035 07:36:03 -- common/autotest_common.sh@1572 -- # return 0 00:04:14.035 07:36:03 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:14.035 07:36:03 -- common/autotest_common.sh@1580 -- # return 0 00:04:14.035 07:36:03 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:14.035 07:36:03 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:14.035 07:36:03 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:14.035 07:36:03 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:14.035 07:36:03 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:14.035 07:36:03 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:14.035 07:36:03 -- common/autotest_common.sh@10 -- # set +x 00:04:14.035 07:36:03 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:14.035 07:36:03 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:14.035 07:36:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.035 07:36:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.035 07:36:03 -- common/autotest_common.sh@10 -- # set +x 00:04:14.035 ************************************ 00:04:14.035 START TEST env 00:04:14.036 ************************************ 00:04:14.036 07:36:03 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:14.036 * Looking for test storage... 00:04:14.036 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:14.036 07:36:03 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:14.036 07:36:03 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:14.036 07:36:03 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:14.036 07:36:03 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:14.036 07:36:03 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:14.036 07:36:03 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:14.036 07:36:03 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:14.036 07:36:03 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:14.036 07:36:03 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:14.036 07:36:03 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:14.036 07:36:03 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:14.036 07:36:03 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:14.036 07:36:03 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:14.036 07:36:03 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:14.036 07:36:03 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:14.036 07:36:03 env -- scripts/common.sh@344 -- # case "$op" in 00:04:14.036 07:36:03 env -- scripts/common.sh@345 -- # : 1 00:04:14.036 07:36:03 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:14.036 07:36:03 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:14.036 07:36:03 env -- scripts/common.sh@365 -- # decimal 1 00:04:14.036 07:36:03 env -- scripts/common.sh@353 -- # local d=1 00:04:14.036 07:36:03 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:14.036 07:36:03 env -- scripts/common.sh@355 -- # echo 1 00:04:14.036 07:36:03 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:14.036 07:36:03 env -- scripts/common.sh@366 -- # decimal 2 00:04:14.036 07:36:03 env -- scripts/common.sh@353 -- # local d=2 00:04:14.036 07:36:03 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:14.036 07:36:03 env -- scripts/common.sh@355 -- # echo 2 00:04:14.036 07:36:03 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:14.036 07:36:03 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:14.036 07:36:03 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:14.036 07:36:03 env -- scripts/common.sh@368 -- # return 0 00:04:14.036 07:36:03 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:14.036 07:36:03 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:14.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.036 --rc genhtml_branch_coverage=1 00:04:14.036 --rc genhtml_function_coverage=1 00:04:14.036 --rc genhtml_legend=1 00:04:14.036 --rc geninfo_all_blocks=1 00:04:14.036 --rc geninfo_unexecuted_blocks=1 00:04:14.036 00:04:14.036 ' 00:04:14.036 07:36:03 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:14.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.036 --rc genhtml_branch_coverage=1 00:04:14.036 --rc genhtml_function_coverage=1 00:04:14.036 --rc genhtml_legend=1 00:04:14.036 --rc geninfo_all_blocks=1 00:04:14.036 --rc geninfo_unexecuted_blocks=1 00:04:14.036 00:04:14.036 ' 00:04:14.036 07:36:03 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:14.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.036 --rc genhtml_branch_coverage=1 00:04:14.036 --rc genhtml_function_coverage=1 00:04:14.036 --rc genhtml_legend=1 00:04:14.036 --rc geninfo_all_blocks=1 00:04:14.036 --rc geninfo_unexecuted_blocks=1 00:04:14.036 00:04:14.036 ' 00:04:14.036 07:36:03 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:14.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.036 --rc genhtml_branch_coverage=1 00:04:14.036 --rc genhtml_function_coverage=1 00:04:14.036 --rc genhtml_legend=1 00:04:14.036 --rc geninfo_all_blocks=1 00:04:14.036 --rc geninfo_unexecuted_blocks=1 00:04:14.036 00:04:14.036 ' 00:04:14.036 07:36:03 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:14.036 07:36:03 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.036 07:36:03 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.036 07:36:03 env -- common/autotest_common.sh@10 -- # set +x 00:04:14.297 ************************************ 00:04:14.297 START TEST env_memory 00:04:14.297 ************************************ 00:04:14.297 07:36:03 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:14.297 00:04:14.297 00:04:14.297 CUnit - A unit testing framework for C - Version 2.1-3 00:04:14.297 http://cunit.sourceforge.net/ 00:04:14.297 00:04:14.297 00:04:14.297 Suite: memory 00:04:14.297 Test: alloc and free memory map ...[2024-11-29 07:36:04.041381] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:14.297 passed 00:04:14.297 Test: mem map translation ...[2024-11-29 07:36:04.080583] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:14.297 [2024-11-29 07:36:04.080724] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:14.297 [2024-11-29 07:36:04.080841] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:14.297 [2024-11-29 07:36:04.080882] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:14.297 passed 00:04:14.297 Test: mem map registration ...[2024-11-29 07:36:04.149269] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:14.297 [2024-11-29 07:36:04.149404] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:14.297 passed 00:04:14.558 Test: mem map adjacent registrations ...passed 00:04:14.558 00:04:14.558 Run Summary: Type Total Ran Passed Failed Inactive 00:04:14.558 suites 1 1 n/a 0 0 00:04:14.558 tests 4 4 4 0 0 00:04:14.558 asserts 152 152 152 0 n/a 00:04:14.558 00:04:14.558 Elapsed time = 0.233 seconds 00:04:14.558 00:04:14.558 real 0m0.273s 00:04:14.558 user 0m0.236s 00:04:14.558 sys 0m0.027s 00:04:14.558 07:36:04 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.558 07:36:04 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:14.558 ************************************ 00:04:14.558 END TEST env_memory 00:04:14.558 ************************************ 00:04:14.558 07:36:04 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:14.558 07:36:04 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.558 07:36:04 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.558 07:36:04 env -- common/autotest_common.sh@10 -- # set +x 00:04:14.558 ************************************ 00:04:14.558 START TEST env_vtophys 00:04:14.558 ************************************ 00:04:14.558 07:36:04 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:14.558 EAL: lib.eal log level changed from notice to debug 00:04:14.558 EAL: Detected lcore 0 as core 0 on socket 0 00:04:14.559 EAL: Detected lcore 1 as core 0 on socket 0 00:04:14.559 EAL: Detected lcore 2 as core 0 on socket 0 00:04:14.559 EAL: Detected lcore 3 as core 0 on socket 0 00:04:14.559 EAL: Detected lcore 4 as core 0 on socket 0 00:04:14.559 EAL: Detected lcore 5 as core 0 on socket 0 00:04:14.559 EAL: Detected lcore 6 as core 0 on socket 0 00:04:14.559 EAL: Detected lcore 7 as core 0 on socket 0 00:04:14.559 EAL: Detected lcore 8 as core 0 on socket 0 00:04:14.559 EAL: Detected lcore 9 as core 0 on socket 0 00:04:14.559 EAL: Maximum logical cores by configuration: 128 00:04:14.559 EAL: Detected CPU lcores: 10 00:04:14.559 EAL: Detected NUMA nodes: 1 00:04:14.559 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:14.559 EAL: Detected shared linkage of DPDK 00:04:14.559 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:14.559 EAL: Selected IOVA mode 'PA' 00:04:14.559 EAL: Probing VFIO support... 00:04:14.559 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:14.559 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:14.559 EAL: Ask a virtual area of 0x2e000 bytes 00:04:14.559 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:14.559 EAL: Setting up physically contiguous memory... 00:04:14.559 EAL: Setting maximum number of open files to 524288 00:04:14.559 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:14.559 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:14.559 EAL: Ask a virtual area of 0x61000 bytes 00:04:14.559 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:14.559 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:14.559 EAL: Ask a virtual area of 0x400000000 bytes 00:04:14.559 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:14.559 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:14.559 EAL: Ask a virtual area of 0x61000 bytes 00:04:14.559 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:14.559 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:14.559 EAL: Ask a virtual area of 0x400000000 bytes 00:04:14.559 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:14.559 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:14.559 EAL: Ask a virtual area of 0x61000 bytes 00:04:14.559 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:14.559 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:14.559 EAL: Ask a virtual area of 0x400000000 bytes 00:04:14.559 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:14.559 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:14.559 EAL: Ask a virtual area of 0x61000 bytes 00:04:14.559 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:14.559 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:14.559 EAL: Ask a virtual area of 0x400000000 bytes 00:04:14.559 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:14.559 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:14.559 EAL: Hugepages will be freed exactly as allocated. 00:04:14.559 EAL: No shared files mode enabled, IPC is disabled 00:04:14.559 EAL: No shared files mode enabled, IPC is disabled 00:04:14.559 EAL: TSC frequency is ~2600000 KHz 00:04:14.559 EAL: Main lcore 0 is ready (tid=7fd71a8cca40;cpuset=[0]) 00:04:14.559 EAL: Trying to obtain current memory policy. 00:04:14.559 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:14.559 EAL: Restoring previous memory policy: 0 00:04:14.559 EAL: request: mp_malloc_sync 00:04:14.559 EAL: No shared files mode enabled, IPC is disabled 00:04:14.559 EAL: Heap on socket 0 was expanded by 2MB 00:04:14.559 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:14.559 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:14.559 EAL: Mem event callback 'spdk:(nil)' registered 00:04:14.559 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:14.820 00:04:14.820 00:04:14.820 CUnit - A unit testing framework for C - Version 2.1-3 00:04:14.820 http://cunit.sourceforge.net/ 00:04:14.820 00:04:14.820 00:04:14.820 Suite: components_suite 00:04:15.081 Test: vtophys_malloc_test ...passed 00:04:15.081 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:15.081 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.081 EAL: Restoring previous memory policy: 4 00:04:15.081 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.081 EAL: request: mp_malloc_sync 00:04:15.081 EAL: No shared files mode enabled, IPC is disabled 00:04:15.081 EAL: Heap on socket 0 was expanded by 4MB 00:04:15.081 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.081 EAL: request: mp_malloc_sync 00:04:15.081 EAL: No shared files mode enabled, IPC is disabled 00:04:15.081 EAL: Heap on socket 0 was shrunk by 4MB 00:04:15.081 EAL: Trying to obtain current memory policy. 00:04:15.081 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.081 EAL: Restoring previous memory policy: 4 00:04:15.081 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.081 EAL: request: mp_malloc_sync 00:04:15.081 EAL: No shared files mode enabled, IPC is disabled 00:04:15.081 EAL: Heap on socket 0 was expanded by 6MB 00:04:15.081 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.081 EAL: request: mp_malloc_sync 00:04:15.081 EAL: No shared files mode enabled, IPC is disabled 00:04:15.081 EAL: Heap on socket 0 was shrunk by 6MB 00:04:15.081 EAL: Trying to obtain current memory policy. 00:04:15.081 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.081 EAL: Restoring previous memory policy: 4 00:04:15.081 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.081 EAL: request: mp_malloc_sync 00:04:15.081 EAL: No shared files mode enabled, IPC is disabled 00:04:15.081 EAL: Heap on socket 0 was expanded by 10MB 00:04:15.081 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.082 EAL: request: mp_malloc_sync 00:04:15.082 EAL: No shared files mode enabled, IPC is disabled 00:04:15.082 EAL: Heap on socket 0 was shrunk by 10MB 00:04:15.082 EAL: Trying to obtain current memory policy. 00:04:15.082 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.082 EAL: Restoring previous memory policy: 4 00:04:15.082 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.082 EAL: request: mp_malloc_sync 00:04:15.082 EAL: No shared files mode enabled, IPC is disabled 00:04:15.082 EAL: Heap on socket 0 was expanded by 18MB 00:04:15.082 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.082 EAL: request: mp_malloc_sync 00:04:15.082 EAL: No shared files mode enabled, IPC is disabled 00:04:15.082 EAL: Heap on socket 0 was shrunk by 18MB 00:04:15.082 EAL: Trying to obtain current memory policy. 00:04:15.082 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.082 EAL: Restoring previous memory policy: 4 00:04:15.082 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.082 EAL: request: mp_malloc_sync 00:04:15.082 EAL: No shared files mode enabled, IPC is disabled 00:04:15.082 EAL: Heap on socket 0 was expanded by 34MB 00:04:15.082 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.082 EAL: request: mp_malloc_sync 00:04:15.082 EAL: No shared files mode enabled, IPC is disabled 00:04:15.082 EAL: Heap on socket 0 was shrunk by 34MB 00:04:15.347 EAL: Trying to obtain current memory policy. 
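The expand/shrink pairs above come from vtophys_spdk_malloc_test: each spdk malloc of the next size class fires the 'spdk:(nil)' mem event callback, the EAL grows the hugepage-backed heap on socket 0, and the matching free shrinks it again. A minimal way to drive the same binary outside the harness (the hugepage setup step is standard SPDK tooling and an assumption here, not something this log shows):

  cd /home/vagrant/spdk_repo/spdk
  sudo HUGEMEM=2048 ./scripts/setup.sh       # reserve 2 MiB hugepages for the EAL heap (assumed step)
  ./test/env/vtophys/vtophys                 # same binary exercised above; allocations grow the heap, frees shrink it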
00:04:15.347 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.347 EAL: Restoring previous memory policy: 4 00:04:15.347 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.347 EAL: request: mp_malloc_sync 00:04:15.347 EAL: No shared files mode enabled, IPC is disabled 00:04:15.347 EAL: Heap on socket 0 was expanded by 66MB 00:04:15.347 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.347 EAL: request: mp_malloc_sync 00:04:15.347 EAL: No shared files mode enabled, IPC is disabled 00:04:15.347 EAL: Heap on socket 0 was shrunk by 66MB 00:04:15.347 EAL: Trying to obtain current memory policy. 00:04:15.347 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.347 EAL: Restoring previous memory policy: 4 00:04:15.347 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.347 EAL: request: mp_malloc_sync 00:04:15.347 EAL: No shared files mode enabled, IPC is disabled 00:04:15.347 EAL: Heap on socket 0 was expanded by 130MB 00:04:15.606 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.606 EAL: request: mp_malloc_sync 00:04:15.606 EAL: No shared files mode enabled, IPC is disabled 00:04:15.606 EAL: Heap on socket 0 was shrunk by 130MB 00:04:15.606 EAL: Trying to obtain current memory policy. 00:04:15.606 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.866 EAL: Restoring previous memory policy: 4 00:04:15.866 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.866 EAL: request: mp_malloc_sync 00:04:15.866 EAL: No shared files mode enabled, IPC is disabled 00:04:15.866 EAL: Heap on socket 0 was expanded by 258MB 00:04:16.124 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.124 EAL: request: mp_malloc_sync 00:04:16.124 EAL: No shared files mode enabled, IPC is disabled 00:04:16.124 EAL: Heap on socket 0 was shrunk by 258MB 00:04:16.383 EAL: Trying to obtain current memory policy. 00:04:16.383 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.383 EAL: Restoring previous memory policy: 4 00:04:16.383 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.383 EAL: request: mp_malloc_sync 00:04:16.384 EAL: No shared files mode enabled, IPC is disabled 00:04:16.384 EAL: Heap on socket 0 was expanded by 514MB 00:04:16.950 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.950 EAL: request: mp_malloc_sync 00:04:16.950 EAL: No shared files mode enabled, IPC is disabled 00:04:16.950 EAL: Heap on socket 0 was shrunk by 514MB 00:04:17.210 EAL: Trying to obtain current memory policy. 
00:04:17.210 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:17.469 EAL: Restoring previous memory policy: 4 00:04:17.469 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.469 EAL: request: mp_malloc_sync 00:04:17.469 EAL: No shared files mode enabled, IPC is disabled 00:04:17.469 EAL: Heap on socket 0 was expanded by 1026MB 00:04:18.404 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.404 EAL: request: mp_malloc_sync 00:04:18.404 EAL: No shared files mode enabled, IPC is disabled 00:04:18.404 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:19.339 passed 00:04:19.339 00:04:19.339 Run Summary: Type Total Ran Passed Failed Inactive 00:04:19.339 suites 1 1 n/a 0 0 00:04:19.339 tests 2 2 2 0 0 00:04:19.339 asserts 5761 5761 5761 0 n/a 00:04:19.339 00:04:19.339 Elapsed time = 4.376 seconds 00:04:19.339 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.339 EAL: request: mp_malloc_sync 00:04:19.339 EAL: No shared files mode enabled, IPC is disabled 00:04:19.339 EAL: Heap on socket 0 was shrunk by 2MB 00:04:19.339 EAL: No shared files mode enabled, IPC is disabled 00:04:19.339 EAL: No shared files mode enabled, IPC is disabled 00:04:19.339 EAL: No shared files mode enabled, IPC is disabled 00:04:19.339 00:04:19.339 real 0m4.652s 00:04:19.339 user 0m3.802s 00:04:19.339 sys 0m0.701s 00:04:19.339 07:36:08 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:19.339 07:36:08 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:19.339 ************************************ 00:04:19.339 END TEST env_vtophys 00:04:19.339 ************************************ 00:04:19.339 07:36:09 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:19.339 07:36:09 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:19.339 07:36:09 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.339 07:36:09 env -- common/autotest_common.sh@10 -- # set +x 00:04:19.339 ************************************ 00:04:19.339 START TEST env_pci 00:04:19.339 ************************************ 00:04:19.339 07:36:09 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:19.339 00:04:19.339 00:04:19.340 CUnit - A unit testing framework for C - Version 2.1-3 00:04:19.340 http://cunit.sourceforge.net/ 00:04:19.340 00:04:19.340 00:04:19.340 Suite: pci 00:04:19.340 Test: pci_hook ...[2024-11-29 07:36:09.044816] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56933 has claimed it 00:04:19.340 EAL: Cannot find device (10000:00:01.0) 00:04:19.340 passed 00:04:19.340 00:04:19.340 Run Summary: Type Total Ran Passed Failed Inactive 00:04:19.340 suites 1 1 n/a 0 0 00:04:19.340 tests 1 1 1 0 0 00:04:19.340 asserts 25 25 25 0 n/a 00:04:19.340 00:04:19.340 Elapsed time = 0.005 seconds 00:04:19.340 EAL: Failed to attach device on primary process 00:04:19.340 00:04:19.340 real 0m0.053s 00:04:19.340 user 0m0.020s 00:04:19.340 sys 0m0.032s 00:04:19.340 ************************************ 00:04:19.340 END TEST env_pci 00:04:19.340 ************************************ 00:04:19.340 07:36:09 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:19.340 07:36:09 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:19.340 07:36:09 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:19.340 07:36:09 env -- env/env.sh@15 -- # uname 00:04:19.340 07:36:09 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:19.340 07:36:09 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:19.340 07:36:09 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:19.340 07:36:09 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:19.340 07:36:09 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.340 07:36:09 env -- common/autotest_common.sh@10 -- # set +x 00:04:19.340 ************************************ 00:04:19.340 START TEST env_dpdk_post_init 00:04:19.340 ************************************ 00:04:19.340 07:36:09 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:19.340 EAL: Detected CPU lcores: 10 00:04:19.340 EAL: Detected NUMA nodes: 1 00:04:19.340 EAL: Detected shared linkage of DPDK 00:04:19.340 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:19.340 EAL: Selected IOVA mode 'PA' 00:04:19.632 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:19.632 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:19.632 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:19.632 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:04:19.632 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:04:19.632 Starting DPDK initialization... 00:04:19.632 Starting SPDK post initialization... 00:04:19.632 SPDK NVMe probe 00:04:19.632 Attaching to 0000:00:10.0 00:04:19.632 Attaching to 0000:00:11.0 00:04:19.632 Attaching to 0000:00:12.0 00:04:19.632 Attaching to 0000:00:13.0 00:04:19.632 Attached to 0000:00:10.0 00:04:19.632 Attached to 0000:00:11.0 00:04:19.632 Attached to 0000:00:13.0 00:04:19.632 Attached to 0000:00:12.0 00:04:19.632 Cleaning up... 
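The four controllers probed here are QEMU-emulated NVMe devices (vendor:device 1b36:0010) that were unbound from the kernel driver before the run; a quick way to confirm what SPDK has claimed (sketch using standard helpers, commands not taken from this log):

  lspci -nn | grep '1b36:0010'        # list the emulated NVMe functions
  sudo ./scripts/setup.sh status      # show which of them are bound to vfio-pci/uio for SPDK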
00:04:19.632 ************************************ 00:04:19.632 END TEST env_dpdk_post_init 00:04:19.632 ************************************ 00:04:19.632 00:04:19.632 real 0m0.238s 00:04:19.632 user 0m0.069s 00:04:19.632 sys 0m0.071s 00:04:19.632 07:36:09 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:19.632 07:36:09 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:19.632 07:36:09 env -- env/env.sh@26 -- # uname 00:04:19.632 07:36:09 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:19.632 07:36:09 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:19.632 07:36:09 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:19.632 07:36:09 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.632 07:36:09 env -- common/autotest_common.sh@10 -- # set +x 00:04:19.632 ************************************ 00:04:19.632 START TEST env_mem_callbacks 00:04:19.632 ************************************ 00:04:19.632 07:36:09 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:19.632 EAL: Detected CPU lcores: 10 00:04:19.632 EAL: Detected NUMA nodes: 1 00:04:19.632 EAL: Detected shared linkage of DPDK 00:04:19.632 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:19.632 EAL: Selected IOVA mode 'PA' 00:04:19.894 00:04:19.894 00:04:19.895 CUnit - A unit testing framework for C - Version 2.1-3 00:04:19.895 http://cunit.sourceforge.net/ 00:04:19.895 00:04:19.895 00:04:19.895 Suite: memory 00:04:19.895 Test: test ... 00:04:19.895 register 0x200000200000 2097152 00:04:19.895 malloc 3145728 00:04:19.895 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:19.895 register 0x200000400000 4194304 00:04:19.895 buf 0x2000004fffc0 len 3145728 PASSED 00:04:19.895 malloc 64 00:04:19.895 buf 0x2000004ffec0 len 64 PASSED 00:04:19.895 malloc 4194304 00:04:19.895 register 0x200000800000 6291456 00:04:19.895 buf 0x2000009fffc0 len 4194304 PASSED 00:04:19.895 free 0x2000004fffc0 3145728 00:04:19.895 free 0x2000004ffec0 64 00:04:19.895 unregister 0x200000400000 4194304 PASSED 00:04:19.895 free 0x2000009fffc0 4194304 00:04:19.895 unregister 0x200000800000 6291456 PASSED 00:04:19.895 malloc 8388608 00:04:19.895 register 0x200000400000 10485760 00:04:19.895 buf 0x2000005fffc0 len 8388608 PASSED 00:04:19.895 free 0x2000005fffc0 8388608 00:04:19.895 unregister 0x200000400000 10485760 PASSED 00:04:19.895 passed 00:04:19.895 00:04:19.895 Run Summary: Type Total Ran Passed Failed Inactive 00:04:19.895 suites 1 1 n/a 0 0 00:04:19.895 tests 1 1 1 0 0 00:04:19.895 asserts 15 15 15 0 n/a 00:04:19.895 00:04:19.895 Elapsed time = 0.040 seconds 00:04:19.895 00:04:19.895 real 0m0.206s 00:04:19.895 user 0m0.056s 00:04:19.895 sys 0m0.047s 00:04:19.895 07:36:09 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:19.895 ************************************ 00:04:19.895 END TEST env_mem_callbacks 00:04:19.895 ************************************ 00:04:19.895 07:36:09 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:19.895 ************************************ 00:04:19.895 END TEST env 00:04:19.895 ************************************ 00:04:19.895 00:04:19.895 real 0m5.858s 00:04:19.895 user 0m4.345s 00:04:19.895 sys 0m1.088s 00:04:19.895 07:36:09 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:19.895 07:36:09 env -- 
common/autotest_common.sh@10 -- # set +x 00:04:19.895 07:36:09 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:19.895 07:36:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:19.895 07:36:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.895 07:36:09 -- common/autotest_common.sh@10 -- # set +x 00:04:19.895 ************************************ 00:04:19.895 START TEST rpc 00:04:19.895 ************************************ 00:04:19.895 07:36:09 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:19.895 * Looking for test storage... 00:04:19.895 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:19.895 07:36:09 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:19.895 07:36:09 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:19.895 07:36:09 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:20.156 07:36:09 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:20.156 07:36:09 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:20.156 07:36:09 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:20.156 07:36:09 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:20.156 07:36:09 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:20.156 07:36:09 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:20.156 07:36:09 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:20.156 07:36:09 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:20.156 07:36:09 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:20.156 07:36:09 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:20.156 07:36:09 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:20.156 07:36:09 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:20.156 07:36:09 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:20.156 07:36:09 rpc -- scripts/common.sh@345 -- # : 1 00:04:20.156 07:36:09 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:20.156 07:36:09 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:20.156 07:36:09 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:20.156 07:36:09 rpc -- scripts/common.sh@353 -- # local d=1 00:04:20.156 07:36:09 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:20.156 07:36:09 rpc -- scripts/common.sh@355 -- # echo 1 00:04:20.156 07:36:09 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:20.157 07:36:09 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:20.157 07:36:09 rpc -- scripts/common.sh@353 -- # local d=2 00:04:20.157 07:36:09 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:20.157 07:36:09 rpc -- scripts/common.sh@355 -- # echo 2 00:04:20.157 07:36:09 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:20.157 07:36:09 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:20.157 07:36:09 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:20.157 07:36:09 rpc -- scripts/common.sh@368 -- # return 0 00:04:20.157 07:36:09 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:20.157 07:36:09 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:20.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.157 --rc genhtml_branch_coverage=1 00:04:20.157 --rc genhtml_function_coverage=1 00:04:20.157 --rc genhtml_legend=1 00:04:20.157 --rc geninfo_all_blocks=1 00:04:20.157 --rc geninfo_unexecuted_blocks=1 00:04:20.157 00:04:20.157 ' 00:04:20.157 07:36:09 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:20.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.157 --rc genhtml_branch_coverage=1 00:04:20.157 --rc genhtml_function_coverage=1 00:04:20.157 --rc genhtml_legend=1 00:04:20.157 --rc geninfo_all_blocks=1 00:04:20.157 --rc geninfo_unexecuted_blocks=1 00:04:20.157 00:04:20.157 ' 00:04:20.157 07:36:09 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:20.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.157 --rc genhtml_branch_coverage=1 00:04:20.157 --rc genhtml_function_coverage=1 00:04:20.157 --rc genhtml_legend=1 00:04:20.157 --rc geninfo_all_blocks=1 00:04:20.157 --rc geninfo_unexecuted_blocks=1 00:04:20.157 00:04:20.157 ' 00:04:20.157 07:36:09 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:20.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.157 --rc genhtml_branch_coverage=1 00:04:20.157 --rc genhtml_function_coverage=1 00:04:20.157 --rc genhtml_legend=1 00:04:20.157 --rc geninfo_all_blocks=1 00:04:20.157 --rc geninfo_unexecuted_blocks=1 00:04:20.157 00:04:20.157 ' 00:04:20.157 07:36:09 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57060 00:04:20.157 07:36:09 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:20.157 07:36:09 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57060 00:04:20.157 07:36:09 rpc -- common/autotest_common.sh@835 -- # '[' -z 57060 ']' 00:04:20.157 07:36:09 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:20.157 07:36:09 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:20.157 07:36:09 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:20.157 07:36:09 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:20.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
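The 'Waiting for process to start up...' message is waitforlisten polling the default RPC socket until spdk_tgt answers; the same handshake can be done by hand, roughly (a sketch, not the harness implementation):

  ./build/bin/spdk_tgt -e bdev &                                   # -e bdev enables the bdev tracepoint group (mask 0x8)
  until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done
  ./scripts/rpc.py trace_get_info | jq -r .tpoint_shm_path         # e.g. /dev/shm/spdk_tgt_trace.pid<pid>, as dumped further down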
00:04:20.157 07:36:09 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:20.157 07:36:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.157 [2024-11-29 07:36:09.952434] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:04:20.157 [2024-11-29 07:36:09.952720] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57060 ] 00:04:20.417 [2024-11-29 07:36:10.113952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.417 [2024-11-29 07:36:10.208651] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:20.417 [2024-11-29 07:36:10.208692] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57060' to capture a snapshot of events at runtime. 00:04:20.417 [2024-11-29 07:36:10.208702] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:20.417 [2024-11-29 07:36:10.208712] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:20.417 [2024-11-29 07:36:10.208719] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57060 for offline analysis/debug. 00:04:20.417 [2024-11-29 07:36:10.209556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.987 07:36:10 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:20.987 07:36:10 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:20.987 07:36:10 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:20.987 07:36:10 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:20.987 07:36:10 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:20.987 07:36:10 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:20.987 07:36:10 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.987 07:36:10 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.987 07:36:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.987 ************************************ 00:04:20.987 START TEST rpc_integrity 00:04:20.987 ************************************ 00:04:20.987 07:36:10 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:20.987 07:36:10 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:20.987 07:36:10 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.987 07:36:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.987 07:36:10 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.987 07:36:10 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:20.987 07:36:10 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:20.987 07:36:10 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:20.987 07:36:10 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:20.987 07:36:10 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.987 07:36:10 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.987 07:36:10 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.987 07:36:10 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:20.987 07:36:10 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:20.987 07:36:10 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.987 07:36:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.987 07:36:10 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.987 07:36:10 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:20.987 { 00:04:20.987 "name": "Malloc0", 00:04:20.987 "aliases": [ 00:04:20.987 "ed93b83a-5e2e-4432-9d4b-b8dd125019aa" 00:04:20.987 ], 00:04:20.987 "product_name": "Malloc disk", 00:04:20.987 "block_size": 512, 00:04:20.987 "num_blocks": 16384, 00:04:20.987 "uuid": "ed93b83a-5e2e-4432-9d4b-b8dd125019aa", 00:04:20.987 "assigned_rate_limits": { 00:04:20.987 "rw_ios_per_sec": 0, 00:04:20.987 "rw_mbytes_per_sec": 0, 00:04:20.987 "r_mbytes_per_sec": 0, 00:04:20.987 "w_mbytes_per_sec": 0 00:04:20.987 }, 00:04:20.987 "claimed": false, 00:04:20.987 "zoned": false, 00:04:20.987 "supported_io_types": { 00:04:20.987 "read": true, 00:04:20.987 "write": true, 00:04:20.987 "unmap": true, 00:04:20.987 "flush": true, 00:04:20.987 "reset": true, 00:04:20.987 "nvme_admin": false, 00:04:20.987 "nvme_io": false, 00:04:20.987 "nvme_io_md": false, 00:04:20.987 "write_zeroes": true, 00:04:20.987 "zcopy": true, 00:04:20.987 "get_zone_info": false, 00:04:20.987 "zone_management": false, 00:04:20.987 "zone_append": false, 00:04:20.987 "compare": false, 00:04:20.987 "compare_and_write": false, 00:04:20.987 "abort": true, 00:04:20.987 "seek_hole": false, 00:04:20.987 "seek_data": false, 00:04:20.987 "copy": true, 00:04:20.987 "nvme_iov_md": false 00:04:20.987 }, 00:04:20.987 "memory_domains": [ 00:04:20.987 { 00:04:20.987 "dma_device_id": "system", 00:04:20.987 "dma_device_type": 1 00:04:20.987 }, 00:04:20.987 { 00:04:20.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:20.987 "dma_device_type": 2 00:04:20.987 } 00:04:20.987 ], 00:04:20.987 "driver_specific": {} 00:04:20.987 } 00:04:20.987 ]' 00:04:20.987 07:36:10 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:20.987 07:36:10 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:20.987 07:36:10 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:20.987 07:36:10 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.987 07:36:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.987 [2024-11-29 07:36:10.913382] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:20.987 [2024-11-29 07:36:10.913436] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:20.987 [2024-11-29 07:36:10.913468] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:04:20.987 [2024-11-29 07:36:10.913479] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:20.987 [2024-11-29 07:36:10.915609] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:20.987 [2024-11-29 07:36:10.915649] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:20.987 Passthru0 00:04:20.987 07:36:10 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.987 
07:36:10 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:20.987 07:36:10 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.987 07:36:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.248 07:36:10 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.248 07:36:10 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:21.248 { 00:04:21.248 "name": "Malloc0", 00:04:21.248 "aliases": [ 00:04:21.248 "ed93b83a-5e2e-4432-9d4b-b8dd125019aa" 00:04:21.248 ], 00:04:21.248 "product_name": "Malloc disk", 00:04:21.248 "block_size": 512, 00:04:21.248 "num_blocks": 16384, 00:04:21.248 "uuid": "ed93b83a-5e2e-4432-9d4b-b8dd125019aa", 00:04:21.248 "assigned_rate_limits": { 00:04:21.248 "rw_ios_per_sec": 0, 00:04:21.248 "rw_mbytes_per_sec": 0, 00:04:21.248 "r_mbytes_per_sec": 0, 00:04:21.248 "w_mbytes_per_sec": 0 00:04:21.248 }, 00:04:21.248 "claimed": true, 00:04:21.248 "claim_type": "exclusive_write", 00:04:21.248 "zoned": false, 00:04:21.248 "supported_io_types": { 00:04:21.248 "read": true, 00:04:21.248 "write": true, 00:04:21.248 "unmap": true, 00:04:21.248 "flush": true, 00:04:21.248 "reset": true, 00:04:21.248 "nvme_admin": false, 00:04:21.248 "nvme_io": false, 00:04:21.248 "nvme_io_md": false, 00:04:21.248 "write_zeroes": true, 00:04:21.248 "zcopy": true, 00:04:21.248 "get_zone_info": false, 00:04:21.248 "zone_management": false, 00:04:21.248 "zone_append": false, 00:04:21.248 "compare": false, 00:04:21.248 "compare_and_write": false, 00:04:21.248 "abort": true, 00:04:21.248 "seek_hole": false, 00:04:21.248 "seek_data": false, 00:04:21.248 "copy": true, 00:04:21.248 "nvme_iov_md": false 00:04:21.248 }, 00:04:21.248 "memory_domains": [ 00:04:21.248 { 00:04:21.248 "dma_device_id": "system", 00:04:21.248 "dma_device_type": 1 00:04:21.248 }, 00:04:21.248 { 00:04:21.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:21.248 "dma_device_type": 2 00:04:21.248 } 00:04:21.248 ], 00:04:21.248 "driver_specific": {} 00:04:21.248 }, 00:04:21.248 { 00:04:21.248 "name": "Passthru0", 00:04:21.249 "aliases": [ 00:04:21.249 "f6a9e674-56ad-5f27-a2a0-c56c2bada7a4" 00:04:21.249 ], 00:04:21.249 "product_name": "passthru", 00:04:21.249 "block_size": 512, 00:04:21.249 "num_blocks": 16384, 00:04:21.249 "uuid": "f6a9e674-56ad-5f27-a2a0-c56c2bada7a4", 00:04:21.249 "assigned_rate_limits": { 00:04:21.249 "rw_ios_per_sec": 0, 00:04:21.249 "rw_mbytes_per_sec": 0, 00:04:21.249 "r_mbytes_per_sec": 0, 00:04:21.249 "w_mbytes_per_sec": 0 00:04:21.249 }, 00:04:21.249 "claimed": false, 00:04:21.249 "zoned": false, 00:04:21.249 "supported_io_types": { 00:04:21.249 "read": true, 00:04:21.249 "write": true, 00:04:21.249 "unmap": true, 00:04:21.249 "flush": true, 00:04:21.249 "reset": true, 00:04:21.249 "nvme_admin": false, 00:04:21.249 "nvme_io": false, 00:04:21.249 "nvme_io_md": false, 00:04:21.249 "write_zeroes": true, 00:04:21.249 "zcopy": true, 00:04:21.249 "get_zone_info": false, 00:04:21.249 "zone_management": false, 00:04:21.249 "zone_append": false, 00:04:21.249 "compare": false, 00:04:21.249 "compare_and_write": false, 00:04:21.249 "abort": true, 00:04:21.249 "seek_hole": false, 00:04:21.249 "seek_data": false, 00:04:21.249 "copy": true, 00:04:21.249 "nvme_iov_md": false 00:04:21.249 }, 00:04:21.249 "memory_domains": [ 00:04:21.249 { 00:04:21.249 "dma_device_id": "system", 00:04:21.249 "dma_device_type": 1 00:04:21.249 }, 00:04:21.249 { 00:04:21.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:21.249 "dma_device_type": 2 
00:04:21.249 } 00:04:21.249 ], 00:04:21.249 "driver_specific": { 00:04:21.249 "passthru": { 00:04:21.249 "name": "Passthru0", 00:04:21.249 "base_bdev_name": "Malloc0" 00:04:21.249 } 00:04:21.249 } 00:04:21.249 } 00:04:21.249 ]' 00:04:21.249 07:36:10 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:21.249 07:36:10 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:21.249 07:36:10 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:21.249 07:36:10 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.249 07:36:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.249 07:36:10 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.249 07:36:10 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:21.249 07:36:10 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.249 07:36:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.249 07:36:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.249 07:36:11 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:21.249 07:36:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.249 07:36:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.249 07:36:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.249 07:36:11 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:21.249 07:36:11 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:21.249 07:36:11 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:21.249 00:04:21.249 real 0m0.243s 00:04:21.249 user 0m0.132s 00:04:21.249 sys 0m0.033s 00:04:21.249 07:36:11 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.249 07:36:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.249 ************************************ 00:04:21.249 END TEST rpc_integrity 00:04:21.249 ************************************ 00:04:21.249 07:36:11 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:21.249 07:36:11 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.249 07:36:11 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.249 07:36:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.249 ************************************ 00:04:21.249 START TEST rpc_plugins 00:04:21.249 ************************************ 00:04:21.249 07:36:11 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:21.249 07:36:11 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:21.249 07:36:11 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.249 07:36:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:21.249 07:36:11 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.249 07:36:11 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:21.249 07:36:11 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:21.249 07:36:11 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.249 07:36:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:21.249 07:36:11 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.249 07:36:11 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:21.249 { 00:04:21.249 "name": "Malloc1", 00:04:21.249 "aliases": 
[ 00:04:21.249 "6e290dad-77e6-4783-87d0-6e7bc3a3d112" 00:04:21.249 ], 00:04:21.249 "product_name": "Malloc disk", 00:04:21.249 "block_size": 4096, 00:04:21.249 "num_blocks": 256, 00:04:21.249 "uuid": "6e290dad-77e6-4783-87d0-6e7bc3a3d112", 00:04:21.249 "assigned_rate_limits": { 00:04:21.249 "rw_ios_per_sec": 0, 00:04:21.249 "rw_mbytes_per_sec": 0, 00:04:21.249 "r_mbytes_per_sec": 0, 00:04:21.249 "w_mbytes_per_sec": 0 00:04:21.249 }, 00:04:21.249 "claimed": false, 00:04:21.249 "zoned": false, 00:04:21.249 "supported_io_types": { 00:04:21.249 "read": true, 00:04:21.249 "write": true, 00:04:21.249 "unmap": true, 00:04:21.249 "flush": true, 00:04:21.249 "reset": true, 00:04:21.249 "nvme_admin": false, 00:04:21.249 "nvme_io": false, 00:04:21.249 "nvme_io_md": false, 00:04:21.249 "write_zeroes": true, 00:04:21.249 "zcopy": true, 00:04:21.249 "get_zone_info": false, 00:04:21.249 "zone_management": false, 00:04:21.249 "zone_append": false, 00:04:21.249 "compare": false, 00:04:21.249 "compare_and_write": false, 00:04:21.249 "abort": true, 00:04:21.249 "seek_hole": false, 00:04:21.249 "seek_data": false, 00:04:21.249 "copy": true, 00:04:21.249 "nvme_iov_md": false 00:04:21.249 }, 00:04:21.249 "memory_domains": [ 00:04:21.249 { 00:04:21.249 "dma_device_id": "system", 00:04:21.249 "dma_device_type": 1 00:04:21.249 }, 00:04:21.249 { 00:04:21.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:21.249 "dma_device_type": 2 00:04:21.249 } 00:04:21.249 ], 00:04:21.249 "driver_specific": {} 00:04:21.249 } 00:04:21.249 ]' 00:04:21.249 07:36:11 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:21.249 07:36:11 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:21.249 07:36:11 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:21.249 07:36:11 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.249 07:36:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:21.249 07:36:11 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.249 07:36:11 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:21.249 07:36:11 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.249 07:36:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:21.511 07:36:11 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.511 07:36:11 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:21.511 07:36:11 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:21.511 07:36:11 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:21.511 00:04:21.511 real 0m0.117s 00:04:21.511 user 0m0.062s 00:04:21.511 sys 0m0.021s 00:04:21.511 07:36:11 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.511 ************************************ 00:04:21.511 END TEST rpc_plugins 00:04:21.511 ************************************ 00:04:21.511 07:36:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:21.511 07:36:11 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:21.511 07:36:11 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.511 07:36:11 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.511 07:36:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.511 ************************************ 00:04:21.511 START TEST rpc_trace_cmd_test 00:04:21.511 ************************************ 00:04:21.511 07:36:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:04:21.511 07:36:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:21.511 07:36:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:21.511 07:36:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.511 07:36:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:21.511 07:36:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.511 07:36:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:21.511 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57060", 00:04:21.511 "tpoint_group_mask": "0x8", 00:04:21.511 "iscsi_conn": { 00:04:21.511 "mask": "0x2", 00:04:21.511 "tpoint_mask": "0x0" 00:04:21.511 }, 00:04:21.511 "scsi": { 00:04:21.511 "mask": "0x4", 00:04:21.511 "tpoint_mask": "0x0" 00:04:21.511 }, 00:04:21.511 "bdev": { 00:04:21.511 "mask": "0x8", 00:04:21.511 "tpoint_mask": "0xffffffffffffffff" 00:04:21.511 }, 00:04:21.511 "nvmf_rdma": { 00:04:21.511 "mask": "0x10", 00:04:21.511 "tpoint_mask": "0x0" 00:04:21.511 }, 00:04:21.511 "nvmf_tcp": { 00:04:21.511 "mask": "0x20", 00:04:21.511 "tpoint_mask": "0x0" 00:04:21.511 }, 00:04:21.511 "ftl": { 00:04:21.511 "mask": "0x40", 00:04:21.511 "tpoint_mask": "0x0" 00:04:21.511 }, 00:04:21.511 "blobfs": { 00:04:21.511 "mask": "0x80", 00:04:21.511 "tpoint_mask": "0x0" 00:04:21.511 }, 00:04:21.511 "dsa": { 00:04:21.511 "mask": "0x200", 00:04:21.511 "tpoint_mask": "0x0" 00:04:21.511 }, 00:04:21.511 "thread": { 00:04:21.511 "mask": "0x400", 00:04:21.511 "tpoint_mask": "0x0" 00:04:21.511 }, 00:04:21.511 "nvme_pcie": { 00:04:21.511 "mask": "0x800", 00:04:21.511 "tpoint_mask": "0x0" 00:04:21.511 }, 00:04:21.511 "iaa": { 00:04:21.511 "mask": "0x1000", 00:04:21.511 "tpoint_mask": "0x0" 00:04:21.511 }, 00:04:21.511 "nvme_tcp": { 00:04:21.511 "mask": "0x2000", 00:04:21.511 "tpoint_mask": "0x0" 00:04:21.511 }, 00:04:21.511 "bdev_nvme": { 00:04:21.511 "mask": "0x4000", 00:04:21.511 "tpoint_mask": "0x0" 00:04:21.511 }, 00:04:21.511 "sock": { 00:04:21.511 "mask": "0x8000", 00:04:21.511 "tpoint_mask": "0x0" 00:04:21.511 }, 00:04:21.511 "blob": { 00:04:21.511 "mask": "0x10000", 00:04:21.511 "tpoint_mask": "0x0" 00:04:21.511 }, 00:04:21.511 "bdev_raid": { 00:04:21.511 "mask": "0x20000", 00:04:21.511 "tpoint_mask": "0x0" 00:04:21.511 }, 00:04:21.511 "scheduler": { 00:04:21.511 "mask": "0x40000", 00:04:21.511 "tpoint_mask": "0x0" 00:04:21.511 } 00:04:21.511 }' 00:04:21.511 07:36:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:21.511 07:36:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:21.511 07:36:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:21.511 07:36:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:21.511 07:36:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:21.511 07:36:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:21.511 07:36:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:21.511 07:36:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:21.511 07:36:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:21.774 ************************************ 00:04:21.774 END TEST rpc_trace_cmd_test 00:04:21.774 ************************************ 00:04:21.774 07:36:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:21.774 00:04:21.774 real 0m0.183s 
00:04:21.774 user 0m0.144s 00:04:21.774 sys 0m0.026s 00:04:21.774 07:36:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.774 07:36:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:21.774 07:36:11 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:21.774 07:36:11 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:21.774 07:36:11 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:21.774 07:36:11 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.774 07:36:11 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.774 07:36:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.774 ************************************ 00:04:21.774 START TEST rpc_daemon_integrity 00:04:21.774 ************************************ 00:04:21.774 07:36:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:21.774 07:36:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:21.774 07:36:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.774 07:36:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.774 07:36:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.774 07:36:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:21.774 07:36:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:21.774 07:36:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:21.774 07:36:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:21.774 07:36:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.774 07:36:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.774 07:36:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.774 07:36:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:21.774 07:36:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:21.774 07:36:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.774 07:36:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.774 07:36:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.774 07:36:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:21.774 { 00:04:21.774 "name": "Malloc2", 00:04:21.774 "aliases": [ 00:04:21.774 "f7249031-b9db-4394-980e-242109a3dd03" 00:04:21.774 ], 00:04:21.774 "product_name": "Malloc disk", 00:04:21.774 "block_size": 512, 00:04:21.774 "num_blocks": 16384, 00:04:21.774 "uuid": "f7249031-b9db-4394-980e-242109a3dd03", 00:04:21.774 "assigned_rate_limits": { 00:04:21.774 "rw_ios_per_sec": 0, 00:04:21.774 "rw_mbytes_per_sec": 0, 00:04:21.774 "r_mbytes_per_sec": 0, 00:04:21.774 "w_mbytes_per_sec": 0 00:04:21.774 }, 00:04:21.774 "claimed": false, 00:04:21.774 "zoned": false, 00:04:21.774 "supported_io_types": { 00:04:21.774 "read": true, 00:04:21.774 "write": true, 00:04:21.774 "unmap": true, 00:04:21.774 "flush": true, 00:04:21.774 "reset": true, 00:04:21.774 "nvme_admin": false, 00:04:21.774 "nvme_io": false, 00:04:21.774 "nvme_io_md": false, 00:04:21.774 "write_zeroes": true, 00:04:21.774 "zcopy": true, 00:04:21.774 "get_zone_info": false, 00:04:21.774 "zone_management": false, 00:04:21.774 "zone_append": false, 00:04:21.774 "compare": false, 00:04:21.774 
"compare_and_write": false, 00:04:21.774 "abort": true, 00:04:21.774 "seek_hole": false, 00:04:21.774 "seek_data": false, 00:04:21.774 "copy": true, 00:04:21.774 "nvme_iov_md": false 00:04:21.774 }, 00:04:21.774 "memory_domains": [ 00:04:21.774 { 00:04:21.774 "dma_device_id": "system", 00:04:21.774 "dma_device_type": 1 00:04:21.774 }, 00:04:21.774 { 00:04:21.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:21.774 "dma_device_type": 2 00:04:21.775 } 00:04:21.775 ], 00:04:21.775 "driver_specific": {} 00:04:21.775 } 00:04:21.775 ]' 00:04:21.775 07:36:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:21.775 07:36:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:21.775 07:36:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:21.775 07:36:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.775 07:36:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.775 [2024-11-29 07:36:11.647846] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:21.775 [2024-11-29 07:36:11.647893] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:21.775 [2024-11-29 07:36:11.647910] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:04:21.775 [2024-11-29 07:36:11.647921] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:21.775 [2024-11-29 07:36:11.650014] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:21.775 [2024-11-29 07:36:11.650051] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:21.775 Passthru0 00:04:21.775 07:36:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.775 07:36:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:21.775 07:36:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.775 07:36:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.775 07:36:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.775 07:36:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:21.775 { 00:04:21.775 "name": "Malloc2", 00:04:21.775 "aliases": [ 00:04:21.775 "f7249031-b9db-4394-980e-242109a3dd03" 00:04:21.775 ], 00:04:21.775 "product_name": "Malloc disk", 00:04:21.775 "block_size": 512, 00:04:21.775 "num_blocks": 16384, 00:04:21.775 "uuid": "f7249031-b9db-4394-980e-242109a3dd03", 00:04:21.775 "assigned_rate_limits": { 00:04:21.775 "rw_ios_per_sec": 0, 00:04:21.775 "rw_mbytes_per_sec": 0, 00:04:21.775 "r_mbytes_per_sec": 0, 00:04:21.775 "w_mbytes_per_sec": 0 00:04:21.775 }, 00:04:21.775 "claimed": true, 00:04:21.775 "claim_type": "exclusive_write", 00:04:21.775 "zoned": false, 00:04:21.775 "supported_io_types": { 00:04:21.775 "read": true, 00:04:21.775 "write": true, 00:04:21.775 "unmap": true, 00:04:21.775 "flush": true, 00:04:21.775 "reset": true, 00:04:21.775 "nvme_admin": false, 00:04:21.775 "nvme_io": false, 00:04:21.775 "nvme_io_md": false, 00:04:21.775 "write_zeroes": true, 00:04:21.775 "zcopy": true, 00:04:21.775 "get_zone_info": false, 00:04:21.775 "zone_management": false, 00:04:21.775 "zone_append": false, 00:04:21.775 "compare": false, 00:04:21.775 "compare_and_write": false, 00:04:21.775 "abort": true, 00:04:21.775 "seek_hole": false, 00:04:21.775 "seek_data": false, 
00:04:21.775 "copy": true, 00:04:21.775 "nvme_iov_md": false 00:04:21.775 }, 00:04:21.775 "memory_domains": [ 00:04:21.775 { 00:04:21.775 "dma_device_id": "system", 00:04:21.775 "dma_device_type": 1 00:04:21.775 }, 00:04:21.775 { 00:04:21.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:21.775 "dma_device_type": 2 00:04:21.775 } 00:04:21.775 ], 00:04:21.775 "driver_specific": {} 00:04:21.775 }, 00:04:21.775 { 00:04:21.775 "name": "Passthru0", 00:04:21.775 "aliases": [ 00:04:21.775 "5b5130a8-8192-56e4-b6ec-98f09c7881c7" 00:04:21.775 ], 00:04:21.775 "product_name": "passthru", 00:04:21.775 "block_size": 512, 00:04:21.775 "num_blocks": 16384, 00:04:21.775 "uuid": "5b5130a8-8192-56e4-b6ec-98f09c7881c7", 00:04:21.775 "assigned_rate_limits": { 00:04:21.775 "rw_ios_per_sec": 0, 00:04:21.775 "rw_mbytes_per_sec": 0, 00:04:21.775 "r_mbytes_per_sec": 0, 00:04:21.775 "w_mbytes_per_sec": 0 00:04:21.775 }, 00:04:21.775 "claimed": false, 00:04:21.775 "zoned": false, 00:04:21.775 "supported_io_types": { 00:04:21.775 "read": true, 00:04:21.775 "write": true, 00:04:21.775 "unmap": true, 00:04:21.775 "flush": true, 00:04:21.775 "reset": true, 00:04:21.775 "nvme_admin": false, 00:04:21.775 "nvme_io": false, 00:04:21.775 "nvme_io_md": false, 00:04:21.775 "write_zeroes": true, 00:04:21.775 "zcopy": true, 00:04:21.775 "get_zone_info": false, 00:04:21.775 "zone_management": false, 00:04:21.775 "zone_append": false, 00:04:21.775 "compare": false, 00:04:21.775 "compare_and_write": false, 00:04:21.775 "abort": true, 00:04:21.775 "seek_hole": false, 00:04:21.775 "seek_data": false, 00:04:21.775 "copy": true, 00:04:21.775 "nvme_iov_md": false 00:04:21.775 }, 00:04:21.775 "memory_domains": [ 00:04:21.775 { 00:04:21.775 "dma_device_id": "system", 00:04:21.775 "dma_device_type": 1 00:04:21.775 }, 00:04:21.775 { 00:04:21.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:21.775 "dma_device_type": 2 00:04:21.775 } 00:04:21.775 ], 00:04:21.775 "driver_specific": { 00:04:21.775 "passthru": { 00:04:21.775 "name": "Passthru0", 00:04:21.775 "base_bdev_name": "Malloc2" 00:04:21.775 } 00:04:21.775 } 00:04:21.775 } 00:04:21.775 ]' 00:04:21.775 07:36:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:21.775 07:36:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:21.775 07:36:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:21.775 07:36:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.775 07:36:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.036 07:36:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.036 07:36:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:22.036 07:36:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.036 07:36:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.036 07:36:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.036 07:36:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:22.036 07:36:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.036 07:36:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.036 07:36:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.036 07:36:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:04:22.036 07:36:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:22.036 ************************************ 00:04:22.036 END TEST rpc_daemon_integrity 00:04:22.036 ************************************ 00:04:22.036 07:36:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:22.036 00:04:22.036 real 0m0.250s 00:04:22.036 user 0m0.128s 00:04:22.036 sys 0m0.036s 00:04:22.036 07:36:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:22.036 07:36:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.036 07:36:11 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:22.036 07:36:11 rpc -- rpc/rpc.sh@84 -- # killprocess 57060 00:04:22.036 07:36:11 rpc -- common/autotest_common.sh@954 -- # '[' -z 57060 ']' 00:04:22.036 07:36:11 rpc -- common/autotest_common.sh@958 -- # kill -0 57060 00:04:22.036 07:36:11 rpc -- common/autotest_common.sh@959 -- # uname 00:04:22.036 07:36:11 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:22.036 07:36:11 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57060 00:04:22.036 killing process with pid 57060 00:04:22.036 07:36:11 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:22.036 07:36:11 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:22.036 07:36:11 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57060' 00:04:22.036 07:36:11 rpc -- common/autotest_common.sh@973 -- # kill 57060 00:04:22.036 07:36:11 rpc -- common/autotest_common.sh@978 -- # wait 57060 00:04:23.413 00:04:23.413 real 0m3.252s 00:04:23.413 user 0m3.689s 00:04:23.413 sys 0m0.603s 00:04:23.413 07:36:12 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:23.413 07:36:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.413 ************************************ 00:04:23.413 END TEST rpc 00:04:23.413 ************************************ 00:04:23.413 07:36:13 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:23.413 07:36:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.413 07:36:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.413 07:36:13 -- common/autotest_common.sh@10 -- # set +x 00:04:23.413 ************************************ 00:04:23.413 START TEST skip_rpc 00:04:23.413 ************************************ 00:04:23.413 07:36:13 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:23.413 * Looking for test storage... 
00:04:23.413 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:23.413 07:36:13 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:23.413 07:36:13 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:23.413 07:36:13 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:23.413 07:36:13 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:23.413 07:36:13 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:23.413 07:36:13 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:23.413 07:36:13 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:23.413 07:36:13 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:23.413 07:36:13 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:23.413 07:36:13 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:23.413 07:36:13 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:23.413 07:36:13 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:23.413 07:36:13 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:23.413 07:36:13 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:23.413 07:36:13 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:23.413 07:36:13 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:23.413 07:36:13 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:23.413 07:36:13 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:23.413 07:36:13 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:23.413 07:36:13 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:23.413 07:36:13 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:23.413 07:36:13 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:23.413 07:36:13 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:23.413 07:36:13 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:23.413 07:36:13 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:23.413 07:36:13 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:23.413 07:36:13 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:23.413 07:36:13 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:23.413 07:36:13 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:23.413 07:36:13 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:23.413 07:36:13 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:23.413 07:36:13 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:23.413 07:36:13 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:23.413 07:36:13 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:23.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.413 --rc genhtml_branch_coverage=1 00:04:23.413 --rc genhtml_function_coverage=1 00:04:23.413 --rc genhtml_legend=1 00:04:23.413 --rc geninfo_all_blocks=1 00:04:23.413 --rc geninfo_unexecuted_blocks=1 00:04:23.413 00:04:23.413 ' 00:04:23.413 07:36:13 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:23.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.414 --rc genhtml_branch_coverage=1 00:04:23.414 --rc genhtml_function_coverage=1 00:04:23.414 --rc genhtml_legend=1 00:04:23.414 --rc geninfo_all_blocks=1 00:04:23.414 --rc geninfo_unexecuted_blocks=1 00:04:23.414 00:04:23.414 ' 00:04:23.414 07:36:13 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:04:23.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.414 --rc genhtml_branch_coverage=1 00:04:23.414 --rc genhtml_function_coverage=1 00:04:23.414 --rc genhtml_legend=1 00:04:23.414 --rc geninfo_all_blocks=1 00:04:23.414 --rc geninfo_unexecuted_blocks=1 00:04:23.414 00:04:23.414 ' 00:04:23.414 07:36:13 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:23.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.414 --rc genhtml_branch_coverage=1 00:04:23.414 --rc genhtml_function_coverage=1 00:04:23.414 --rc genhtml_legend=1 00:04:23.414 --rc geninfo_all_blocks=1 00:04:23.414 --rc geninfo_unexecuted_blocks=1 00:04:23.414 00:04:23.414 ' 00:04:23.414 07:36:13 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:23.414 07:36:13 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:23.414 07:36:13 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:23.414 07:36:13 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.414 07:36:13 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.414 07:36:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.414 ************************************ 00:04:23.414 START TEST skip_rpc 00:04:23.414 ************************************ 00:04:23.414 07:36:13 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:23.414 07:36:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57267 00:04:23.414 07:36:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:23.414 07:36:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:23.414 07:36:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:23.414 [2024-11-29 07:36:13.239762] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:04:23.414 [2024-11-29 07:36:13.239939] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57267 ] 00:04:23.674 [2024-11-29 07:36:13.391017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.674 [2024-11-29 07:36:13.485662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.001 07:36:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:29.001 07:36:18 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:29.001 07:36:18 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:29.001 07:36:18 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:29.001 07:36:18 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:29.001 07:36:18 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:29.001 07:36:18 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:29.001 07:36:18 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:29.001 07:36:18 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.001 07:36:18 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.001 07:36:18 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:29.001 07:36:18 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:29.001 07:36:18 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:29.001 07:36:18 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:29.001 07:36:18 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:29.001 07:36:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:29.001 07:36:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57267 00:04:29.001 07:36:18 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57267 ']' 00:04:29.001 07:36:18 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57267 00:04:29.001 07:36:18 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:29.001 07:36:18 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:29.001 07:36:18 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57267 00:04:29.001 killing process with pid 57267 00:04:29.001 07:36:18 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:29.001 07:36:18 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:29.001 07:36:18 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57267' 00:04:29.001 07:36:18 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57267 00:04:29.001 07:36:18 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57267 00:04:29.570 00:04:29.570 real 0m6.207s 00:04:29.570 user 0m5.855s 00:04:29.570 sys 0m0.256s 00:04:29.570 07:36:19 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.570 07:36:19 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.570 ************************************ 00:04:29.570 END TEST skip_rpc 00:04:29.570 
************************************ 00:04:29.570 07:36:19 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:29.570 07:36:19 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.570 07:36:19 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.570 07:36:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.570 ************************************ 00:04:29.570 START TEST skip_rpc_with_json 00:04:29.570 ************************************ 00:04:29.570 07:36:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:29.570 07:36:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:29.570 07:36:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57366 00:04:29.570 07:36:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:29.570 07:36:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57366 00:04:29.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:29.570 07:36:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57366 ']' 00:04:29.570 07:36:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.570 07:36:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:29.570 07:36:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.570 07:36:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:29.570 07:36:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:29.570 07:36:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:29.570 [2024-11-29 07:36:19.505226] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:04:29.570 [2024-11-29 07:36:19.505345] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57366 ] 00:04:29.830 [2024-11-29 07:36:19.659525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.830 [2024-11-29 07:36:19.737237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.397 07:36:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:30.397 07:36:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:30.397 07:36:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:30.397 07:36:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.397 07:36:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:30.655 [2024-11-29 07:36:20.340168] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:30.655 request: 00:04:30.655 { 00:04:30.655 "trtype": "tcp", 00:04:30.655 "method": "nvmf_get_transports", 00:04:30.655 "req_id": 1 00:04:30.655 } 00:04:30.655 Got JSON-RPC error response 00:04:30.655 response: 00:04:30.655 { 00:04:30.655 "code": -19, 00:04:30.655 "message": "No such device" 00:04:30.655 } 00:04:30.655 07:36:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:30.655 07:36:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:30.655 07:36:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.655 07:36:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:30.655 [2024-11-29 07:36:20.352257] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:30.655 07:36:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.655 07:36:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:30.655 07:36:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.655 07:36:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:30.655 07:36:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.655 07:36:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:30.655 { 00:04:30.655 "subsystems": [ 00:04:30.655 { 00:04:30.655 "subsystem": "fsdev", 00:04:30.655 "config": [ 00:04:30.655 { 00:04:30.655 "method": "fsdev_set_opts", 00:04:30.655 "params": { 00:04:30.655 "fsdev_io_pool_size": 65535, 00:04:30.655 "fsdev_io_cache_size": 256 00:04:30.655 } 00:04:30.655 } 00:04:30.655 ] 00:04:30.655 }, 00:04:30.655 { 00:04:30.655 "subsystem": "keyring", 00:04:30.655 "config": [] 00:04:30.655 }, 00:04:30.655 { 00:04:30.655 "subsystem": "iobuf", 00:04:30.655 "config": [ 00:04:30.655 { 00:04:30.655 "method": "iobuf_set_options", 00:04:30.655 "params": { 00:04:30.655 "small_pool_count": 8192, 00:04:30.655 "large_pool_count": 1024, 00:04:30.655 "small_bufsize": 8192, 00:04:30.655 "large_bufsize": 135168, 00:04:30.656 "enable_numa": false 00:04:30.656 } 00:04:30.656 } 00:04:30.656 ] 00:04:30.656 }, 00:04:30.656 { 00:04:30.656 "subsystem": "sock", 00:04:30.656 "config": [ 00:04:30.656 { 
00:04:30.656 "method": "sock_set_default_impl", 00:04:30.656 "params": { 00:04:30.656 "impl_name": "posix" 00:04:30.656 } 00:04:30.656 }, 00:04:30.656 { 00:04:30.656 "method": "sock_impl_set_options", 00:04:30.656 "params": { 00:04:30.656 "impl_name": "ssl", 00:04:30.656 "recv_buf_size": 4096, 00:04:30.656 "send_buf_size": 4096, 00:04:30.656 "enable_recv_pipe": true, 00:04:30.656 "enable_quickack": false, 00:04:30.656 "enable_placement_id": 0, 00:04:30.656 "enable_zerocopy_send_server": true, 00:04:30.656 "enable_zerocopy_send_client": false, 00:04:30.656 "zerocopy_threshold": 0, 00:04:30.656 "tls_version": 0, 00:04:30.656 "enable_ktls": false 00:04:30.656 } 00:04:30.656 }, 00:04:30.656 { 00:04:30.656 "method": "sock_impl_set_options", 00:04:30.656 "params": { 00:04:30.656 "impl_name": "posix", 00:04:30.656 "recv_buf_size": 2097152, 00:04:30.656 "send_buf_size": 2097152, 00:04:30.656 "enable_recv_pipe": true, 00:04:30.656 "enable_quickack": false, 00:04:30.656 "enable_placement_id": 0, 00:04:30.656 "enable_zerocopy_send_server": true, 00:04:30.656 "enable_zerocopy_send_client": false, 00:04:30.656 "zerocopy_threshold": 0, 00:04:30.656 "tls_version": 0, 00:04:30.656 "enable_ktls": false 00:04:30.656 } 00:04:30.656 } 00:04:30.656 ] 00:04:30.656 }, 00:04:30.656 { 00:04:30.656 "subsystem": "vmd", 00:04:30.656 "config": [] 00:04:30.656 }, 00:04:30.656 { 00:04:30.656 "subsystem": "accel", 00:04:30.656 "config": [ 00:04:30.656 { 00:04:30.656 "method": "accel_set_options", 00:04:30.656 "params": { 00:04:30.656 "small_cache_size": 128, 00:04:30.656 "large_cache_size": 16, 00:04:30.656 "task_count": 2048, 00:04:30.656 "sequence_count": 2048, 00:04:30.656 "buf_count": 2048 00:04:30.656 } 00:04:30.656 } 00:04:30.656 ] 00:04:30.656 }, 00:04:30.656 { 00:04:30.656 "subsystem": "bdev", 00:04:30.656 "config": [ 00:04:30.656 { 00:04:30.656 "method": "bdev_set_options", 00:04:30.656 "params": { 00:04:30.656 "bdev_io_pool_size": 65535, 00:04:30.656 "bdev_io_cache_size": 256, 00:04:30.656 "bdev_auto_examine": true, 00:04:30.656 "iobuf_small_cache_size": 128, 00:04:30.656 "iobuf_large_cache_size": 16 00:04:30.656 } 00:04:30.656 }, 00:04:30.656 { 00:04:30.656 "method": "bdev_raid_set_options", 00:04:30.656 "params": { 00:04:30.656 "process_window_size_kb": 1024, 00:04:30.656 "process_max_bandwidth_mb_sec": 0 00:04:30.656 } 00:04:30.656 }, 00:04:30.656 { 00:04:30.656 "method": "bdev_iscsi_set_options", 00:04:30.656 "params": { 00:04:30.656 "timeout_sec": 30 00:04:30.656 } 00:04:30.656 }, 00:04:30.656 { 00:04:30.656 "method": "bdev_nvme_set_options", 00:04:30.656 "params": { 00:04:30.656 "action_on_timeout": "none", 00:04:30.656 "timeout_us": 0, 00:04:30.656 "timeout_admin_us": 0, 00:04:30.656 "keep_alive_timeout_ms": 10000, 00:04:30.656 "arbitration_burst": 0, 00:04:30.656 "low_priority_weight": 0, 00:04:30.656 "medium_priority_weight": 0, 00:04:30.656 "high_priority_weight": 0, 00:04:30.656 "nvme_adminq_poll_period_us": 10000, 00:04:30.656 "nvme_ioq_poll_period_us": 0, 00:04:30.656 "io_queue_requests": 0, 00:04:30.656 "delay_cmd_submit": true, 00:04:30.656 "transport_retry_count": 4, 00:04:30.656 "bdev_retry_count": 3, 00:04:30.656 "transport_ack_timeout": 0, 00:04:30.656 "ctrlr_loss_timeout_sec": 0, 00:04:30.656 "reconnect_delay_sec": 0, 00:04:30.656 "fast_io_fail_timeout_sec": 0, 00:04:30.656 "disable_auto_failback": false, 00:04:30.656 "generate_uuids": false, 00:04:30.656 "transport_tos": 0, 00:04:30.656 "nvme_error_stat": false, 00:04:30.656 "rdma_srq_size": 0, 00:04:30.656 "io_path_stat": false, 
00:04:30.656 "allow_accel_sequence": false, 00:04:30.656 "rdma_max_cq_size": 0, 00:04:30.656 "rdma_cm_event_timeout_ms": 0, 00:04:30.656 "dhchap_digests": [ 00:04:30.656 "sha256", 00:04:30.656 "sha384", 00:04:30.656 "sha512" 00:04:30.656 ], 00:04:30.656 "dhchap_dhgroups": [ 00:04:30.656 "null", 00:04:30.656 "ffdhe2048", 00:04:30.656 "ffdhe3072", 00:04:30.656 "ffdhe4096", 00:04:30.656 "ffdhe6144", 00:04:30.656 "ffdhe8192" 00:04:30.656 ] 00:04:30.656 } 00:04:30.656 }, 00:04:30.656 { 00:04:30.656 "method": "bdev_nvme_set_hotplug", 00:04:30.656 "params": { 00:04:30.656 "period_us": 100000, 00:04:30.656 "enable": false 00:04:30.656 } 00:04:30.656 }, 00:04:30.656 { 00:04:30.656 "method": "bdev_wait_for_examine" 00:04:30.656 } 00:04:30.656 ] 00:04:30.656 }, 00:04:30.656 { 00:04:30.656 "subsystem": "scsi", 00:04:30.656 "config": null 00:04:30.656 }, 00:04:30.656 { 00:04:30.656 "subsystem": "scheduler", 00:04:30.656 "config": [ 00:04:30.656 { 00:04:30.656 "method": "framework_set_scheduler", 00:04:30.656 "params": { 00:04:30.656 "name": "static" 00:04:30.656 } 00:04:30.656 } 00:04:30.656 ] 00:04:30.656 }, 00:04:30.656 { 00:04:30.656 "subsystem": "vhost_scsi", 00:04:30.656 "config": [] 00:04:30.656 }, 00:04:30.656 { 00:04:30.656 "subsystem": "vhost_blk", 00:04:30.656 "config": [] 00:04:30.656 }, 00:04:30.656 { 00:04:30.656 "subsystem": "ublk", 00:04:30.656 "config": [] 00:04:30.656 }, 00:04:30.656 { 00:04:30.656 "subsystem": "nbd", 00:04:30.656 "config": [] 00:04:30.656 }, 00:04:30.656 { 00:04:30.656 "subsystem": "nvmf", 00:04:30.656 "config": [ 00:04:30.656 { 00:04:30.656 "method": "nvmf_set_config", 00:04:30.656 "params": { 00:04:30.656 "discovery_filter": "match_any", 00:04:30.656 "admin_cmd_passthru": { 00:04:30.656 "identify_ctrlr": false 00:04:30.656 }, 00:04:30.656 "dhchap_digests": [ 00:04:30.656 "sha256", 00:04:30.656 "sha384", 00:04:30.656 "sha512" 00:04:30.656 ], 00:04:30.656 "dhchap_dhgroups": [ 00:04:30.656 "null", 00:04:30.656 "ffdhe2048", 00:04:30.656 "ffdhe3072", 00:04:30.656 "ffdhe4096", 00:04:30.656 "ffdhe6144", 00:04:30.656 "ffdhe8192" 00:04:30.656 ] 00:04:30.656 } 00:04:30.656 }, 00:04:30.656 { 00:04:30.656 "method": "nvmf_set_max_subsystems", 00:04:30.656 "params": { 00:04:30.656 "max_subsystems": 1024 00:04:30.656 } 00:04:30.656 }, 00:04:30.656 { 00:04:30.656 "method": "nvmf_set_crdt", 00:04:30.656 "params": { 00:04:30.656 "crdt1": 0, 00:04:30.656 "crdt2": 0, 00:04:30.656 "crdt3": 0 00:04:30.656 } 00:04:30.656 }, 00:04:30.656 { 00:04:30.656 "method": "nvmf_create_transport", 00:04:30.656 "params": { 00:04:30.656 "trtype": "TCP", 00:04:30.656 "max_queue_depth": 128, 00:04:30.656 "max_io_qpairs_per_ctrlr": 127, 00:04:30.656 "in_capsule_data_size": 4096, 00:04:30.656 "max_io_size": 131072, 00:04:30.656 "io_unit_size": 131072, 00:04:30.656 "max_aq_depth": 128, 00:04:30.656 "num_shared_buffers": 511, 00:04:30.656 "buf_cache_size": 4294967295, 00:04:30.656 "dif_insert_or_strip": false, 00:04:30.656 "zcopy": false, 00:04:30.656 "c2h_success": true, 00:04:30.656 "sock_priority": 0, 00:04:30.656 "abort_timeout_sec": 1, 00:04:30.656 "ack_timeout": 0, 00:04:30.656 "data_wr_pool_size": 0 00:04:30.656 } 00:04:30.656 } 00:04:30.656 ] 00:04:30.656 }, 00:04:30.656 { 00:04:30.656 "subsystem": "iscsi", 00:04:30.656 "config": [ 00:04:30.656 { 00:04:30.656 "method": "iscsi_set_options", 00:04:30.656 "params": { 00:04:30.656 "node_base": "iqn.2016-06.io.spdk", 00:04:30.656 "max_sessions": 128, 00:04:30.656 "max_connections_per_session": 2, 00:04:30.656 "max_queue_depth": 64, 00:04:30.656 
"default_time2wait": 2, 00:04:30.656 "default_time2retain": 20, 00:04:30.656 "first_burst_length": 8192, 00:04:30.656 "immediate_data": true, 00:04:30.656 "allow_duplicated_isid": false, 00:04:30.656 "error_recovery_level": 0, 00:04:30.656 "nop_timeout": 60, 00:04:30.656 "nop_in_interval": 30, 00:04:30.656 "disable_chap": false, 00:04:30.656 "require_chap": false, 00:04:30.656 "mutual_chap": false, 00:04:30.656 "chap_group": 0, 00:04:30.656 "max_large_datain_per_connection": 64, 00:04:30.656 "max_r2t_per_connection": 4, 00:04:30.656 "pdu_pool_size": 36864, 00:04:30.656 "immediate_data_pool_size": 16384, 00:04:30.656 "data_out_pool_size": 2048 00:04:30.656 } 00:04:30.656 } 00:04:30.656 ] 00:04:30.656 } 00:04:30.656 ] 00:04:30.656 } 00:04:30.656 07:36:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:30.656 07:36:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57366 00:04:30.656 07:36:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57366 ']' 00:04:30.656 07:36:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57366 00:04:30.656 07:36:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:30.656 07:36:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:30.656 07:36:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57366 00:04:30.656 killing process with pid 57366 00:04:30.657 07:36:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:30.657 07:36:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:30.657 07:36:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57366' 00:04:30.657 07:36:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57366 00:04:30.657 07:36:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57366 00:04:32.032 07:36:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57400 00:04:32.032 07:36:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:32.032 07:36:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:37.290 07:36:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57400 00:04:37.290 07:36:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57400 ']' 00:04:37.290 07:36:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57400 00:04:37.290 07:36:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:37.290 07:36:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:37.290 07:36:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57400 00:04:37.290 killing process with pid 57400 00:04:37.290 07:36:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:37.290 07:36:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:37.290 07:36:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57400' 00:04:37.290 07:36:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 57400 00:04:37.290 07:36:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57400 00:04:38.225 07:36:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:38.225 07:36:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:38.225 ************************************ 00:04:38.225 END TEST skip_rpc_with_json 00:04:38.225 ************************************ 00:04:38.225 00:04:38.225 real 0m8.463s 00:04:38.225 user 0m8.126s 00:04:38.225 sys 0m0.555s 00:04:38.225 07:36:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.225 07:36:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:38.225 07:36:27 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:38.225 07:36:27 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.225 07:36:27 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.225 07:36:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.225 ************************************ 00:04:38.225 START TEST skip_rpc_with_delay 00:04:38.225 ************************************ 00:04:38.225 07:36:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:38.225 07:36:27 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:38.225 07:36:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:38.225 07:36:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:38.225 07:36:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:38.225 07:36:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:38.225 07:36:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:38.225 07:36:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:38.225 07:36:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:38.225 07:36:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:38.225 07:36:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:38.225 07:36:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:38.225 07:36:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:38.225 [2024-11-29 07:36:28.022197] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:38.225 ************************************ 00:04:38.225 END TEST skip_rpc_with_delay 00:04:38.225 ************************************ 00:04:38.225 07:36:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:38.225 07:36:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:38.225 07:36:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:38.225 07:36:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:38.225 00:04:38.225 real 0m0.122s 00:04:38.225 user 0m0.069s 00:04:38.225 sys 0m0.052s 00:04:38.226 07:36:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.226 07:36:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:38.226 07:36:28 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:38.226 07:36:28 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:38.226 07:36:28 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:38.226 07:36:28 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.226 07:36:28 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.226 07:36:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.226 ************************************ 00:04:38.226 START TEST exit_on_failed_rpc_init 00:04:38.226 ************************************ 00:04:38.226 07:36:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:38.226 07:36:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57522 00:04:38.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:38.226 07:36:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57522 00:04:38.226 07:36:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57522 ']' 00:04:38.226 07:36:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.226 07:36:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:38.226 07:36:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:38.226 07:36:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.226 07:36:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:38.226 07:36:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:38.485 [2024-11-29 07:36:28.191634] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:04:38.485 [2024-11-29 07:36:28.191753] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57522 ] 00:04:38.485 [2024-11-29 07:36:28.345654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.485 [2024-11-29 07:36:28.421329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.420 07:36:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:39.420 07:36:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:39.420 07:36:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:39.420 07:36:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:39.420 07:36:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:39.420 07:36:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:39.420 07:36:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:39.420 07:36:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:39.420 07:36:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:39.420 07:36:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:39.420 07:36:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:39.420 07:36:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:39.420 07:36:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:39.420 07:36:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:39.420 07:36:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:39.421 [2024-11-29 07:36:29.093863] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:04:39.421 [2024-11-29 07:36:29.094098] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57539 ] 00:04:39.421 [2024-11-29 07:36:29.254991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.421 [2024-11-29 07:36:29.347309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:39.421 [2024-11-29 07:36:29.347382] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:39.421 [2024-11-29 07:36:29.347395] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:39.421 [2024-11-29 07:36:29.347407] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:39.679 07:36:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:39.679 07:36:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:39.679 07:36:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:39.679 07:36:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:39.679 07:36:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:39.679 07:36:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:39.679 07:36:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:39.679 07:36:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57522 00:04:39.679 07:36:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57522 ']' 00:04:39.679 07:36:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57522 00:04:39.679 07:36:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:39.679 07:36:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:39.679 07:36:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57522 00:04:39.679 killing process with pid 57522 00:04:39.679 07:36:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:39.679 07:36:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:39.679 07:36:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57522' 00:04:39.679 07:36:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57522 00:04:39.680 07:36:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57522 00:04:41.056 ************************************ 00:04:41.056 END TEST exit_on_failed_rpc_init 00:04:41.056 ************************************ 00:04:41.056 00:04:41.056 real 0m2.609s 00:04:41.056 user 0m2.915s 00:04:41.056 sys 0m0.393s 00:04:41.056 07:36:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.056 07:36:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:41.056 07:36:30 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:41.056 ************************************ 00:04:41.056 END TEST skip_rpc 00:04:41.056 ************************************ 00:04:41.056 00:04:41.056 real 0m17.756s 00:04:41.056 user 0m17.109s 00:04:41.056 sys 0m1.427s 00:04:41.056 07:36:30 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.056 07:36:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.056 07:36:30 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:41.056 07:36:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.056 07:36:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.056 07:36:30 -- common/autotest_common.sh@10 -- # set +x 00:04:41.056 
************************************ 00:04:41.056 START TEST rpc_client 00:04:41.056 ************************************ 00:04:41.056 07:36:30 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:41.056 * Looking for test storage... 00:04:41.056 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:41.056 07:36:30 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:41.056 07:36:30 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:41.056 07:36:30 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:41.056 07:36:30 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:41.056 07:36:30 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.056 07:36:30 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.056 07:36:30 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.056 07:36:30 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.056 07:36:30 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.056 07:36:30 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.057 07:36:30 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.057 07:36:30 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.057 07:36:30 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.057 07:36:30 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.057 07:36:30 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.057 07:36:30 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:41.057 07:36:30 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:41.057 07:36:30 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.057 07:36:30 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:41.057 07:36:30 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:41.057 07:36:30 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:41.057 07:36:30 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.057 07:36:30 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:41.057 07:36:30 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.057 07:36:30 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:41.057 07:36:30 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:41.057 07:36:30 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.057 07:36:30 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:41.057 07:36:30 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.057 07:36:30 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.057 07:36:30 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.057 07:36:30 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:41.057 07:36:30 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.057 07:36:30 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:41.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.057 --rc genhtml_branch_coverage=1 00:04:41.057 --rc genhtml_function_coverage=1 00:04:41.057 --rc genhtml_legend=1 00:04:41.057 --rc geninfo_all_blocks=1 00:04:41.057 --rc geninfo_unexecuted_blocks=1 00:04:41.057 00:04:41.057 ' 00:04:41.057 07:36:30 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:41.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.057 --rc genhtml_branch_coverage=1 00:04:41.057 --rc genhtml_function_coverage=1 00:04:41.057 --rc genhtml_legend=1 00:04:41.057 --rc geninfo_all_blocks=1 00:04:41.057 --rc geninfo_unexecuted_blocks=1 00:04:41.057 00:04:41.057 ' 00:04:41.057 07:36:30 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:41.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.057 --rc genhtml_branch_coverage=1 00:04:41.057 --rc genhtml_function_coverage=1 00:04:41.057 --rc genhtml_legend=1 00:04:41.057 --rc geninfo_all_blocks=1 00:04:41.057 --rc geninfo_unexecuted_blocks=1 00:04:41.057 00:04:41.057 ' 00:04:41.057 07:36:30 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:41.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.057 --rc genhtml_branch_coverage=1 00:04:41.057 --rc genhtml_function_coverage=1 00:04:41.057 --rc genhtml_legend=1 00:04:41.057 --rc geninfo_all_blocks=1 00:04:41.057 --rc geninfo_unexecuted_blocks=1 00:04:41.057 00:04:41.057 ' 00:04:41.057 07:36:30 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:41.057 OK 00:04:41.316 07:36:31 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:41.316 00:04:41.316 real 0m0.189s 00:04:41.316 user 0m0.120s 00:04:41.316 sys 0m0.076s 00:04:41.316 ************************************ 00:04:41.316 END TEST rpc_client 00:04:41.316 ************************************ 00:04:41.316 07:36:31 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.316 07:36:31 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:41.316 07:36:31 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:41.317 07:36:31 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.317 07:36:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.317 07:36:31 -- common/autotest_common.sh@10 -- # set +x 00:04:41.317 ************************************ 00:04:41.317 START TEST json_config 00:04:41.317 ************************************ 00:04:41.317 07:36:31 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:41.317 07:36:31 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:41.317 07:36:31 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:41.317 07:36:31 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:41.317 07:36:31 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:41.317 07:36:31 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.317 07:36:31 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.317 07:36:31 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.317 07:36:31 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.317 07:36:31 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.317 07:36:31 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.317 07:36:31 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.317 07:36:31 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.317 07:36:31 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.317 07:36:31 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.317 07:36:31 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.317 07:36:31 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:41.317 07:36:31 json_config -- scripts/common.sh@345 -- # : 1 00:04:41.317 07:36:31 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.317 07:36:31 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:41.317 07:36:31 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:41.317 07:36:31 json_config -- scripts/common.sh@353 -- # local d=1 00:04:41.317 07:36:31 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.317 07:36:31 json_config -- scripts/common.sh@355 -- # echo 1 00:04:41.317 07:36:31 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.317 07:36:31 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:41.317 07:36:31 json_config -- scripts/common.sh@353 -- # local d=2 00:04:41.317 07:36:31 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.317 07:36:31 json_config -- scripts/common.sh@355 -- # echo 2 00:04:41.317 07:36:31 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.317 07:36:31 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.317 07:36:31 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.317 07:36:31 json_config -- scripts/common.sh@368 -- # return 0 00:04:41.317 07:36:31 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.317 07:36:31 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:41.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.317 --rc genhtml_branch_coverage=1 00:04:41.317 --rc genhtml_function_coverage=1 00:04:41.317 --rc genhtml_legend=1 00:04:41.317 --rc geninfo_all_blocks=1 00:04:41.317 --rc geninfo_unexecuted_blocks=1 00:04:41.317 00:04:41.317 ' 00:04:41.317 07:36:31 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:41.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.317 --rc genhtml_branch_coverage=1 00:04:41.317 --rc genhtml_function_coverage=1 00:04:41.317 --rc genhtml_legend=1 00:04:41.317 --rc geninfo_all_blocks=1 00:04:41.317 --rc geninfo_unexecuted_blocks=1 00:04:41.317 00:04:41.317 ' 00:04:41.317 07:36:31 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:41.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.317 --rc genhtml_branch_coverage=1 00:04:41.317 --rc genhtml_function_coverage=1 00:04:41.317 --rc genhtml_legend=1 00:04:41.317 --rc geninfo_all_blocks=1 00:04:41.317 --rc geninfo_unexecuted_blocks=1 00:04:41.317 00:04:41.317 ' 00:04:41.317 07:36:31 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:41.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.317 --rc genhtml_branch_coverage=1 00:04:41.317 --rc genhtml_function_coverage=1 00:04:41.317 --rc genhtml_legend=1 00:04:41.317 --rc geninfo_all_blocks=1 00:04:41.317 --rc geninfo_unexecuted_blocks=1 00:04:41.317 00:04:41.317 ' 00:04:41.317 07:36:31 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:41.317 07:36:31 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:41.317 07:36:31 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:41.317 07:36:31 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:41.317 07:36:31 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:41.317 07:36:31 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:41.317 07:36:31 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:41.317 07:36:31 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:41.317 07:36:31 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:41.317 07:36:31 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:41.317 07:36:31 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:41.317 07:36:31 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:41.317 07:36:31 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:39fc8c46-1855-47aa-88c4-9fe997d49a3f 00:04:41.317 07:36:31 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=39fc8c46-1855-47aa-88c4-9fe997d49a3f 00:04:41.317 07:36:31 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:41.317 07:36:31 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:41.317 07:36:31 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:41.317 07:36:31 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:41.317 07:36:31 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:41.317 07:36:31 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:41.317 07:36:31 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:41.317 07:36:31 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:41.317 07:36:31 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:41.317 07:36:31 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.317 07:36:31 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.317 07:36:31 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.317 07:36:31 json_config -- paths/export.sh@5 -- # export PATH 00:04:41.317 07:36:31 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.317 07:36:31 json_config -- nvmf/common.sh@51 -- # : 0 00:04:41.317 07:36:31 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:41.317 07:36:31 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:41.317 07:36:31 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:41.317 07:36:31 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:41.317 07:36:31 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:41.317 07:36:31 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:41.317 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:41.317 07:36:31 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:41.317 07:36:31 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:41.317 07:36:31 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:41.317 07:36:31 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:41.317 07:36:31 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:41.317 07:36:31 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:41.317 07:36:31 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:41.317 07:36:31 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:41.317 WARNING: No tests are enabled so not running JSON configuration tests 00:04:41.317 07:36:31 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:41.317 07:36:31 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:41.317 00:04:41.317 real 0m0.145s 00:04:41.317 user 0m0.088s 00:04:41.317 sys 0m0.061s 00:04:41.317 07:36:31 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.317 07:36:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:41.317 ************************************ 00:04:41.317 END TEST json_config 00:04:41.317 ************************************ 00:04:41.317 07:36:31 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:41.317 07:36:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.317 07:36:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.317 07:36:31 -- common/autotest_common.sh@10 -- # set +x 00:04:41.579 ************************************ 00:04:41.579 START TEST json_config_extra_key 00:04:41.579 ************************************ 00:04:41.579 07:36:31 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:41.579 07:36:31 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:41.579 07:36:31 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:04:41.579 07:36:31 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:41.579 07:36:31 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:41.579 07:36:31 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.579 07:36:31 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.579 07:36:31 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.579 07:36:31 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.579 07:36:31 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.579 07:36:31 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.579 07:36:31 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.579 07:36:31 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.579 07:36:31 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.579 07:36:31 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.579 07:36:31 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.579 07:36:31 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:41.579 07:36:31 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:41.579 07:36:31 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.579 07:36:31 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:41.579 07:36:31 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:41.579 07:36:31 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:41.579 07:36:31 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.579 07:36:31 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:41.579 07:36:31 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.579 07:36:31 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:41.579 07:36:31 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:41.579 07:36:31 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.579 07:36:31 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:41.579 07:36:31 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.579 07:36:31 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.579 07:36:31 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.579 07:36:31 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:41.579 07:36:31 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.579 07:36:31 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:41.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.579 --rc genhtml_branch_coverage=1 00:04:41.579 --rc genhtml_function_coverage=1 00:04:41.579 --rc genhtml_legend=1 00:04:41.579 --rc geninfo_all_blocks=1 00:04:41.579 --rc geninfo_unexecuted_blocks=1 00:04:41.579 00:04:41.579 ' 00:04:41.579 07:36:31 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:41.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.579 --rc genhtml_branch_coverage=1 00:04:41.579 --rc genhtml_function_coverage=1 00:04:41.579 --rc genhtml_legend=1 00:04:41.579 --rc geninfo_all_blocks=1 00:04:41.579 --rc geninfo_unexecuted_blocks=1 00:04:41.579 00:04:41.579 ' 00:04:41.579 07:36:31 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:41.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.579 --rc genhtml_branch_coverage=1 00:04:41.579 --rc genhtml_function_coverage=1 00:04:41.579 --rc genhtml_legend=1 00:04:41.579 --rc geninfo_all_blocks=1 00:04:41.579 --rc geninfo_unexecuted_blocks=1 00:04:41.579 00:04:41.579 ' 00:04:41.579 07:36:31 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:41.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.579 --rc genhtml_branch_coverage=1 00:04:41.579 --rc 
genhtml_function_coverage=1 00:04:41.579 --rc genhtml_legend=1 00:04:41.579 --rc geninfo_all_blocks=1 00:04:41.579 --rc geninfo_unexecuted_blocks=1 00:04:41.579 00:04:41.579 ' 00:04:41.579 07:36:31 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:41.579 07:36:31 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:41.579 07:36:31 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:41.579 07:36:31 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:41.579 07:36:31 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:41.579 07:36:31 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:41.579 07:36:31 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:41.579 07:36:31 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:41.579 07:36:31 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:41.579 07:36:31 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:41.579 07:36:31 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:41.579 07:36:31 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:41.579 07:36:31 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:39fc8c46-1855-47aa-88c4-9fe997d49a3f 00:04:41.579 07:36:31 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=39fc8c46-1855-47aa-88c4-9fe997d49a3f 00:04:41.579 07:36:31 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:41.579 07:36:31 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:41.579 07:36:31 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:41.579 07:36:31 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:41.579 07:36:31 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:41.579 07:36:31 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:41.579 07:36:31 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:41.579 07:36:31 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:41.579 07:36:31 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:41.579 07:36:31 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.579 07:36:31 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.579 07:36:31 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.579 07:36:31 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:41.580 07:36:31 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.580 07:36:31 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:41.580 07:36:31 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:41.580 07:36:31 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:41.580 07:36:31 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:41.580 07:36:31 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:41.580 07:36:31 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:41.580 07:36:31 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:41.580 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:41.580 07:36:31 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:41.580 07:36:31 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:41.580 07:36:31 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:41.580 07:36:31 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:41.580 07:36:31 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:41.580 07:36:31 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:41.580 07:36:31 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:41.580 07:36:31 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:41.580 07:36:31 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:41.580 07:36:31 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:41.580 07:36:31 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:41.580 07:36:31 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:41.580 07:36:31 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:41.580 INFO: launching applications... 00:04:41.580 07:36:31 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
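The nvmf/common.sh lines above (and the identical ones in the earlier json_config run) log "line 33: [: : integer expression expected" because the traced test is literally '[' '' -eq 1 ']': the variable being gated is empty, and '[' will not treat an empty string as an integer. A minimal reproduction and the usual defensive forms, as a sketch (the variable name here is illustrative, not the one common.sh actually tests):

    flag=''                                 # empty, as in the traced '[' '' -eq 1 ']'
    if [ "$flag" -eq 1 ]; then :; fi        # prints "[: : integer expression expected" and falls through
    if [ "${flag:-0}" -eq 1 ]; then :; fi   # defaulting to 0 keeps the comparison numeric and silent
    if [[ $flag == 1 ]]; then :; fi         # string comparison sidesteps the integer requirement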
00:04:41.580 07:36:31 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:41.580 07:36:31 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:41.580 07:36:31 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:41.580 07:36:31 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:41.580 07:36:31 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:41.580 07:36:31 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:41.580 07:36:31 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:41.580 07:36:31 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:41.580 07:36:31 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57728 00:04:41.580 Waiting for target to run... 00:04:41.580 07:36:31 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:41.580 07:36:31 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57728 /var/tmp/spdk_tgt.sock 00:04:41.580 07:36:31 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57728 ']' 00:04:41.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:41.580 07:36:31 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:41.580 07:36:31 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:41.580 07:36:31 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:41.580 07:36:31 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:41.580 07:36:31 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:41.580 07:36:31 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:41.580 [2024-11-29 07:36:31.486408] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:04:41.580 [2024-11-29 07:36:31.486544] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57728 ] 00:04:42.153 [2024-11-29 07:36:31.811311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.153 [2024-11-29 07:36:31.900373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.725 07:36:32 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:42.725 07:36:32 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:42.725 00:04:42.725 INFO: shutting down applications... 00:04:42.725 07:36:32 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:42.725 07:36:32 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
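The json_config/common.sh trace above launches spdk_tgt for the extra_key test and blocks on its RPC socket; the lines that follow tear it down by polling the PID after SIGINT. A condensed sketch of that lifecycle, assuming the suite's waitforlisten helper and abbreviating the repo path into SPDK_DIR:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    app_socket=/var/tmp/spdk_tgt.sock

    "$SPDK_DIR"/build/bin/spdk_tgt -m 0x1 -s 1024 -r "$app_socket" \
        --json "$SPDK_DIR"/test/json_config/extra_key.json &
    app_pid=$!
    waitforlisten "$app_pid" "$app_socket"        # suite helper: waits until the UNIX socket accepts RPCs

    # ... exercise the target over $app_socket ...

    kill -SIGINT "$app_pid"                       # ask the target to shut down cleanly
    for ((i = 0; i < 30; i++)); do
        kill -0 "$app_pid" 2>/dev/null || break   # kill -0 only probes whether the PID is still alive
        sleep 0.5
    done
    echo 'SPDK target shutdown done'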
00:04:42.725 07:36:32 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:42.725 07:36:32 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:42.725 07:36:32 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:42.725 07:36:32 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57728 ]] 00:04:42.725 07:36:32 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57728 00:04:42.725 07:36:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:42.725 07:36:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:42.725 07:36:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57728 00:04:42.725 07:36:32 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:42.984 07:36:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:42.984 07:36:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:42.984 07:36:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57728 00:04:42.984 07:36:32 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:43.551 07:36:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:43.551 07:36:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:43.551 07:36:33 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57728 00:04:43.551 07:36:33 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:44.118 07:36:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:44.118 07:36:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:44.118 07:36:33 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57728 00:04:44.118 07:36:33 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:44.118 07:36:33 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:44.118 SPDK target shutdown done 00:04:44.118 Success 00:04:44.118 07:36:33 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:44.118 07:36:33 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:44.118 07:36:33 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:44.118 ************************************ 00:04:44.118 END TEST json_config_extra_key 00:04:44.118 ************************************ 00:04:44.118 00:04:44.118 real 0m2.639s 00:04:44.118 user 0m2.408s 00:04:44.118 sys 0m0.402s 00:04:44.118 07:36:33 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.118 07:36:33 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:44.118 07:36:33 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:44.118 07:36:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.118 07:36:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.118 07:36:33 -- common/autotest_common.sh@10 -- # set +x 00:04:44.118 ************************************ 00:04:44.118 START TEST alias_rpc 00:04:44.118 ************************************ 00:04:44.118 07:36:33 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:44.118 * Looking for test storage... 
00:04:44.118 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:44.118 07:36:34 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:44.118 07:36:34 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:44.118 07:36:34 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:44.379 07:36:34 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:44.379 07:36:34 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:44.380 07:36:34 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:44.380 07:36:34 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:44.380 07:36:34 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:44.380 07:36:34 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:44.380 07:36:34 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:44.380 07:36:34 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:44.380 07:36:34 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:44.380 07:36:34 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:44.380 07:36:34 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:44.380 07:36:34 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:44.380 07:36:34 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:44.380 07:36:34 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:44.380 07:36:34 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:44.380 07:36:34 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:44.380 07:36:34 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:44.380 07:36:34 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:44.380 07:36:34 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:44.380 07:36:34 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:44.380 07:36:34 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:44.380 07:36:34 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:44.380 07:36:34 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:44.380 07:36:34 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:44.380 07:36:34 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:44.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
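Each test section re-runs the same lcov gate from scripts/common.sh that is traced around these lines: lt 1.15 2 splits both version strings on IFS=.-: and compares them field by field, and since 1.15 is below 2 the branch/function coverage switches get exported. A standalone sketch of that gate (the function name is mine; the field walk mirrors the traced cmp_versions, counting missing fields as 0):

    version_lt() {        # returns 0 when $1 < $2
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < max; v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1          # equal versions are not "less than"
    }

    if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
        export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi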
00:04:44.380 07:36:34 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:44.380 07:36:34 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:44.380 07:36:34 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:44.380 07:36:34 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:44.380 07:36:34 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:44.380 07:36:34 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:44.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.380 --rc genhtml_branch_coverage=1 00:04:44.380 --rc genhtml_function_coverage=1 00:04:44.380 --rc genhtml_legend=1 00:04:44.380 --rc geninfo_all_blocks=1 00:04:44.380 --rc geninfo_unexecuted_blocks=1 00:04:44.380 00:04:44.380 ' 00:04:44.380 07:36:34 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:44.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.380 --rc genhtml_branch_coverage=1 00:04:44.380 --rc genhtml_function_coverage=1 00:04:44.380 --rc genhtml_legend=1 00:04:44.380 --rc geninfo_all_blocks=1 00:04:44.380 --rc geninfo_unexecuted_blocks=1 00:04:44.380 00:04:44.380 ' 00:04:44.380 07:36:34 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:44.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.380 --rc genhtml_branch_coverage=1 00:04:44.380 --rc genhtml_function_coverage=1 00:04:44.380 --rc genhtml_legend=1 00:04:44.380 --rc geninfo_all_blocks=1 00:04:44.380 --rc geninfo_unexecuted_blocks=1 00:04:44.380 00:04:44.380 ' 00:04:44.380 07:36:34 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:44.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.380 --rc genhtml_branch_coverage=1 00:04:44.380 --rc genhtml_function_coverage=1 00:04:44.380 --rc genhtml_legend=1 00:04:44.380 --rc geninfo_all_blocks=1 00:04:44.380 --rc geninfo_unexecuted_blocks=1 00:04:44.380 00:04:44.380 ' 00:04:44.380 07:36:34 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:44.380 07:36:34 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57815 00:04:44.380 07:36:34 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57815 00:04:44.380 07:36:34 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57815 ']' 00:04:44.380 07:36:34 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:44.380 07:36:34 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.380 07:36:34 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:44.380 07:36:34 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.380 07:36:34 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:44.380 07:36:34 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.380 [2024-11-29 07:36:34.178650] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:04:44.380 [2024-11-29 07:36:34.178876] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57815 ] 00:04:44.641 [2024-11-29 07:36:34.339523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.641 [2024-11-29 07:36:34.440100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.212 07:36:35 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:45.212 07:36:35 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:45.212 07:36:35 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:45.473 07:36:35 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57815 00:04:45.473 07:36:35 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57815 ']' 00:04:45.473 07:36:35 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57815 00:04:45.473 07:36:35 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:45.473 07:36:35 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:45.473 07:36:35 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57815 00:04:45.473 07:36:35 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:45.473 07:36:35 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:45.473 07:36:35 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57815' 00:04:45.473 killing process with pid 57815 00:04:45.473 07:36:35 alias_rpc -- common/autotest_common.sh@973 -- # kill 57815 00:04:45.473 07:36:35 alias_rpc -- common/autotest_common.sh@978 -- # wait 57815 00:04:46.857 ************************************ 00:04:46.857 END TEST alias_rpc 00:04:46.857 ************************************ 00:04:46.857 00:04:46.857 real 0m2.752s 00:04:46.857 user 0m2.870s 00:04:46.857 sys 0m0.382s 00:04:46.857 07:36:36 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.857 07:36:36 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.857 07:36:36 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:46.857 07:36:36 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:46.857 07:36:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.857 07:36:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.857 07:36:36 -- common/autotest_common.sh@10 -- # set +x 00:04:46.857 ************************************ 00:04:46.857 START TEST spdkcli_tcp 00:04:46.857 ************************************ 00:04:46.857 07:36:36 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:47.118 * Looking for test storage... 
00:04:47.118 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:47.118 07:36:36 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:47.119 07:36:36 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:47.119 07:36:36 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:47.119 07:36:36 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:47.119 07:36:36 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.119 07:36:36 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.119 07:36:36 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.119 07:36:36 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.119 07:36:36 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.119 07:36:36 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.119 07:36:36 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.119 07:36:36 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.119 07:36:36 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.119 07:36:36 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.119 07:36:36 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.119 07:36:36 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:47.119 07:36:36 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:47.119 07:36:36 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.119 07:36:36 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:47.119 07:36:36 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:47.119 07:36:36 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:47.119 07:36:36 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.119 07:36:36 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:47.119 07:36:36 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.119 07:36:36 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:47.119 07:36:36 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:47.119 07:36:36 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.119 07:36:36 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:47.119 07:36:36 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.119 07:36:36 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.119 07:36:36 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.119 07:36:36 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:47.119 07:36:36 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.119 07:36:36 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:47.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.119 --rc genhtml_branch_coverage=1 00:04:47.119 --rc genhtml_function_coverage=1 00:04:47.119 --rc genhtml_legend=1 00:04:47.119 --rc geninfo_all_blocks=1 00:04:47.119 --rc geninfo_unexecuted_blocks=1 00:04:47.119 00:04:47.119 ' 00:04:47.119 07:36:36 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:47.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.119 --rc genhtml_branch_coverage=1 00:04:47.119 --rc genhtml_function_coverage=1 00:04:47.119 --rc genhtml_legend=1 00:04:47.119 --rc geninfo_all_blocks=1 00:04:47.119 --rc geninfo_unexecuted_blocks=1 00:04:47.119 
00:04:47.119 ' 00:04:47.119 07:36:36 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:47.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.119 --rc genhtml_branch_coverage=1 00:04:47.119 --rc genhtml_function_coverage=1 00:04:47.119 --rc genhtml_legend=1 00:04:47.119 --rc geninfo_all_blocks=1 00:04:47.119 --rc geninfo_unexecuted_blocks=1 00:04:47.119 00:04:47.119 ' 00:04:47.119 07:36:36 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:47.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.119 --rc genhtml_branch_coverage=1 00:04:47.119 --rc genhtml_function_coverage=1 00:04:47.119 --rc genhtml_legend=1 00:04:47.119 --rc geninfo_all_blocks=1 00:04:47.119 --rc geninfo_unexecuted_blocks=1 00:04:47.119 00:04:47.119 ' 00:04:47.119 07:36:36 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:47.119 07:36:36 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:47.119 07:36:36 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:47.119 07:36:36 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:47.119 07:36:36 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:47.119 07:36:36 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:47.119 07:36:36 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:47.119 07:36:36 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:47.119 07:36:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:47.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:47.119 07:36:36 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57911 00:04:47.119 07:36:36 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57911 00:04:47.119 07:36:36 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:47.119 07:36:36 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57911 ']' 00:04:47.119 07:36:36 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.119 07:36:36 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:47.119 07:36:36 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.119 07:36:36 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:47.119 07:36:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:47.119 [2024-11-29 07:36:36.987831] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:04:47.119 [2024-11-29 07:36:36.988290] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57911 ] 00:04:47.382 [2024-11-29 07:36:37.149098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:47.382 [2024-11-29 07:36:37.246062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:47.382 [2024-11-29 07:36:37.246143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.955 07:36:37 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:47.955 07:36:37 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:47.955 07:36:37 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:47.955 07:36:37 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57928 00:04:47.955 07:36:37 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:48.217 [ 00:04:48.217 "bdev_malloc_delete", 00:04:48.217 "bdev_malloc_create", 00:04:48.217 "bdev_null_resize", 00:04:48.217 "bdev_null_delete", 00:04:48.217 "bdev_null_create", 00:04:48.217 "bdev_nvme_cuse_unregister", 00:04:48.217 "bdev_nvme_cuse_register", 00:04:48.217 "bdev_opal_new_user", 00:04:48.217 "bdev_opal_set_lock_state", 00:04:48.217 "bdev_opal_delete", 00:04:48.217 "bdev_opal_get_info", 00:04:48.217 "bdev_opal_create", 00:04:48.217 "bdev_nvme_opal_revert", 00:04:48.217 "bdev_nvme_opal_init", 00:04:48.217 "bdev_nvme_send_cmd", 00:04:48.217 "bdev_nvme_set_keys", 00:04:48.217 "bdev_nvme_get_path_iostat", 00:04:48.217 "bdev_nvme_get_mdns_discovery_info", 00:04:48.217 "bdev_nvme_stop_mdns_discovery", 00:04:48.217 "bdev_nvme_start_mdns_discovery", 00:04:48.217 "bdev_nvme_set_multipath_policy", 00:04:48.217 "bdev_nvme_set_preferred_path", 00:04:48.217 "bdev_nvme_get_io_paths", 00:04:48.217 "bdev_nvme_remove_error_injection", 00:04:48.217 "bdev_nvme_add_error_injection", 00:04:48.217 "bdev_nvme_get_discovery_info", 00:04:48.217 "bdev_nvme_stop_discovery", 00:04:48.217 "bdev_nvme_start_discovery", 00:04:48.217 "bdev_nvme_get_controller_health_info", 00:04:48.217 "bdev_nvme_disable_controller", 00:04:48.217 "bdev_nvme_enable_controller", 00:04:48.217 "bdev_nvme_reset_controller", 00:04:48.217 "bdev_nvme_get_transport_statistics", 00:04:48.217 "bdev_nvme_apply_firmware", 00:04:48.217 "bdev_nvme_detach_controller", 00:04:48.217 "bdev_nvme_get_controllers", 00:04:48.217 "bdev_nvme_attach_controller", 00:04:48.217 "bdev_nvme_set_hotplug", 00:04:48.217 "bdev_nvme_set_options", 00:04:48.217 "bdev_passthru_delete", 00:04:48.217 "bdev_passthru_create", 00:04:48.217 "bdev_lvol_set_parent_bdev", 00:04:48.217 "bdev_lvol_set_parent", 00:04:48.217 "bdev_lvol_check_shallow_copy", 00:04:48.217 "bdev_lvol_start_shallow_copy", 00:04:48.217 "bdev_lvol_grow_lvstore", 00:04:48.217 "bdev_lvol_get_lvols", 00:04:48.217 "bdev_lvol_get_lvstores", 00:04:48.217 "bdev_lvol_delete", 00:04:48.217 "bdev_lvol_set_read_only", 00:04:48.217 "bdev_lvol_resize", 00:04:48.217 "bdev_lvol_decouple_parent", 00:04:48.217 "bdev_lvol_inflate", 00:04:48.217 "bdev_lvol_rename", 00:04:48.217 "bdev_lvol_clone_bdev", 00:04:48.217 "bdev_lvol_clone", 00:04:48.217 "bdev_lvol_snapshot", 00:04:48.217 "bdev_lvol_create", 00:04:48.217 "bdev_lvol_delete_lvstore", 00:04:48.217 "bdev_lvol_rename_lvstore", 00:04:48.217 
"bdev_lvol_create_lvstore", 00:04:48.217 "bdev_raid_set_options", 00:04:48.217 "bdev_raid_remove_base_bdev", 00:04:48.217 "bdev_raid_add_base_bdev", 00:04:48.217 "bdev_raid_delete", 00:04:48.217 "bdev_raid_create", 00:04:48.217 "bdev_raid_get_bdevs", 00:04:48.217 "bdev_error_inject_error", 00:04:48.217 "bdev_error_delete", 00:04:48.217 "bdev_error_create", 00:04:48.218 "bdev_split_delete", 00:04:48.218 "bdev_split_create", 00:04:48.218 "bdev_delay_delete", 00:04:48.218 "bdev_delay_create", 00:04:48.218 "bdev_delay_update_latency", 00:04:48.218 "bdev_zone_block_delete", 00:04:48.218 "bdev_zone_block_create", 00:04:48.218 "blobfs_create", 00:04:48.218 "blobfs_detect", 00:04:48.218 "blobfs_set_cache_size", 00:04:48.218 "bdev_xnvme_delete", 00:04:48.218 "bdev_xnvme_create", 00:04:48.218 "bdev_aio_delete", 00:04:48.218 "bdev_aio_rescan", 00:04:48.218 "bdev_aio_create", 00:04:48.218 "bdev_ftl_set_property", 00:04:48.218 "bdev_ftl_get_properties", 00:04:48.218 "bdev_ftl_get_stats", 00:04:48.218 "bdev_ftl_unmap", 00:04:48.218 "bdev_ftl_unload", 00:04:48.218 "bdev_ftl_delete", 00:04:48.218 "bdev_ftl_load", 00:04:48.218 "bdev_ftl_create", 00:04:48.218 "bdev_virtio_attach_controller", 00:04:48.218 "bdev_virtio_scsi_get_devices", 00:04:48.218 "bdev_virtio_detach_controller", 00:04:48.218 "bdev_virtio_blk_set_hotplug", 00:04:48.218 "bdev_iscsi_delete", 00:04:48.218 "bdev_iscsi_create", 00:04:48.218 "bdev_iscsi_set_options", 00:04:48.218 "accel_error_inject_error", 00:04:48.218 "ioat_scan_accel_module", 00:04:48.218 "dsa_scan_accel_module", 00:04:48.218 "iaa_scan_accel_module", 00:04:48.218 "keyring_file_remove_key", 00:04:48.218 "keyring_file_add_key", 00:04:48.218 "keyring_linux_set_options", 00:04:48.218 "fsdev_aio_delete", 00:04:48.218 "fsdev_aio_create", 00:04:48.218 "iscsi_get_histogram", 00:04:48.218 "iscsi_enable_histogram", 00:04:48.218 "iscsi_set_options", 00:04:48.218 "iscsi_get_auth_groups", 00:04:48.218 "iscsi_auth_group_remove_secret", 00:04:48.218 "iscsi_auth_group_add_secret", 00:04:48.218 "iscsi_delete_auth_group", 00:04:48.218 "iscsi_create_auth_group", 00:04:48.218 "iscsi_set_discovery_auth", 00:04:48.218 "iscsi_get_options", 00:04:48.218 "iscsi_target_node_request_logout", 00:04:48.218 "iscsi_target_node_set_redirect", 00:04:48.218 "iscsi_target_node_set_auth", 00:04:48.218 "iscsi_target_node_add_lun", 00:04:48.218 "iscsi_get_stats", 00:04:48.218 "iscsi_get_connections", 00:04:48.218 "iscsi_portal_group_set_auth", 00:04:48.218 "iscsi_start_portal_group", 00:04:48.218 "iscsi_delete_portal_group", 00:04:48.218 "iscsi_create_portal_group", 00:04:48.218 "iscsi_get_portal_groups", 00:04:48.218 "iscsi_delete_target_node", 00:04:48.218 "iscsi_target_node_remove_pg_ig_maps", 00:04:48.218 "iscsi_target_node_add_pg_ig_maps", 00:04:48.218 "iscsi_create_target_node", 00:04:48.218 "iscsi_get_target_nodes", 00:04:48.218 "iscsi_delete_initiator_group", 00:04:48.218 "iscsi_initiator_group_remove_initiators", 00:04:48.218 "iscsi_initiator_group_add_initiators", 00:04:48.218 "iscsi_create_initiator_group", 00:04:48.218 "iscsi_get_initiator_groups", 00:04:48.218 "nvmf_set_crdt", 00:04:48.218 "nvmf_set_config", 00:04:48.218 "nvmf_set_max_subsystems", 00:04:48.218 "nvmf_stop_mdns_prr", 00:04:48.218 "nvmf_publish_mdns_prr", 00:04:48.218 "nvmf_subsystem_get_listeners", 00:04:48.218 "nvmf_subsystem_get_qpairs", 00:04:48.218 "nvmf_subsystem_get_controllers", 00:04:48.218 "nvmf_get_stats", 00:04:48.218 "nvmf_get_transports", 00:04:48.218 "nvmf_create_transport", 00:04:48.218 "nvmf_get_targets", 00:04:48.218 
"nvmf_delete_target", 00:04:48.218 "nvmf_create_target", 00:04:48.218 "nvmf_subsystem_allow_any_host", 00:04:48.218 "nvmf_subsystem_set_keys", 00:04:48.218 "nvmf_subsystem_remove_host", 00:04:48.218 "nvmf_subsystem_add_host", 00:04:48.218 "nvmf_ns_remove_host", 00:04:48.218 "nvmf_ns_add_host", 00:04:48.218 "nvmf_subsystem_remove_ns", 00:04:48.218 "nvmf_subsystem_set_ns_ana_group", 00:04:48.218 "nvmf_subsystem_add_ns", 00:04:48.218 "nvmf_subsystem_listener_set_ana_state", 00:04:48.218 "nvmf_discovery_get_referrals", 00:04:48.218 "nvmf_discovery_remove_referral", 00:04:48.218 "nvmf_discovery_add_referral", 00:04:48.218 "nvmf_subsystem_remove_listener", 00:04:48.218 "nvmf_subsystem_add_listener", 00:04:48.218 "nvmf_delete_subsystem", 00:04:48.218 "nvmf_create_subsystem", 00:04:48.218 "nvmf_get_subsystems", 00:04:48.218 "env_dpdk_get_mem_stats", 00:04:48.218 "nbd_get_disks", 00:04:48.218 "nbd_stop_disk", 00:04:48.218 "nbd_start_disk", 00:04:48.218 "ublk_recover_disk", 00:04:48.218 "ublk_get_disks", 00:04:48.218 "ublk_stop_disk", 00:04:48.218 "ublk_start_disk", 00:04:48.218 "ublk_destroy_target", 00:04:48.218 "ublk_create_target", 00:04:48.218 "virtio_blk_create_transport", 00:04:48.218 "virtio_blk_get_transports", 00:04:48.218 "vhost_controller_set_coalescing", 00:04:48.218 "vhost_get_controllers", 00:04:48.218 "vhost_delete_controller", 00:04:48.218 "vhost_create_blk_controller", 00:04:48.218 "vhost_scsi_controller_remove_target", 00:04:48.218 "vhost_scsi_controller_add_target", 00:04:48.218 "vhost_start_scsi_controller", 00:04:48.218 "vhost_create_scsi_controller", 00:04:48.218 "thread_set_cpumask", 00:04:48.218 "scheduler_set_options", 00:04:48.218 "framework_get_governor", 00:04:48.218 "framework_get_scheduler", 00:04:48.218 "framework_set_scheduler", 00:04:48.218 "framework_get_reactors", 00:04:48.218 "thread_get_io_channels", 00:04:48.218 "thread_get_pollers", 00:04:48.218 "thread_get_stats", 00:04:48.218 "framework_monitor_context_switch", 00:04:48.218 "spdk_kill_instance", 00:04:48.218 "log_enable_timestamps", 00:04:48.218 "log_get_flags", 00:04:48.218 "log_clear_flag", 00:04:48.218 "log_set_flag", 00:04:48.218 "log_get_level", 00:04:48.218 "log_set_level", 00:04:48.218 "log_get_print_level", 00:04:48.218 "log_set_print_level", 00:04:48.218 "framework_enable_cpumask_locks", 00:04:48.218 "framework_disable_cpumask_locks", 00:04:48.218 "framework_wait_init", 00:04:48.218 "framework_start_init", 00:04:48.218 "scsi_get_devices", 00:04:48.218 "bdev_get_histogram", 00:04:48.218 "bdev_enable_histogram", 00:04:48.218 "bdev_set_qos_limit", 00:04:48.218 "bdev_set_qd_sampling_period", 00:04:48.218 "bdev_get_bdevs", 00:04:48.218 "bdev_reset_iostat", 00:04:48.218 "bdev_get_iostat", 00:04:48.218 "bdev_examine", 00:04:48.218 "bdev_wait_for_examine", 00:04:48.218 "bdev_set_options", 00:04:48.218 "accel_get_stats", 00:04:48.218 "accel_set_options", 00:04:48.218 "accel_set_driver", 00:04:48.218 "accel_crypto_key_destroy", 00:04:48.218 "accel_crypto_keys_get", 00:04:48.218 "accel_crypto_key_create", 00:04:48.218 "accel_assign_opc", 00:04:48.218 "accel_get_module_info", 00:04:48.218 "accel_get_opc_assignments", 00:04:48.218 "vmd_rescan", 00:04:48.218 "vmd_remove_device", 00:04:48.218 "vmd_enable", 00:04:48.218 "sock_get_default_impl", 00:04:48.218 "sock_set_default_impl", 00:04:48.218 "sock_impl_set_options", 00:04:48.218 "sock_impl_get_options", 00:04:48.218 "iobuf_get_stats", 00:04:48.218 "iobuf_set_options", 00:04:48.218 "keyring_get_keys", 00:04:48.218 "framework_get_pci_devices", 00:04:48.218 
"framework_get_config", 00:04:48.218 "framework_get_subsystems", 00:04:48.218 "fsdev_set_opts", 00:04:48.218 "fsdev_get_opts", 00:04:48.218 "trace_get_info", 00:04:48.218 "trace_get_tpoint_group_mask", 00:04:48.218 "trace_disable_tpoint_group", 00:04:48.218 "trace_enable_tpoint_group", 00:04:48.218 "trace_clear_tpoint_mask", 00:04:48.218 "trace_set_tpoint_mask", 00:04:48.218 "notify_get_notifications", 00:04:48.218 "notify_get_types", 00:04:48.218 "spdk_get_version", 00:04:48.218 "rpc_get_methods" 00:04:48.218 ] 00:04:48.218 07:36:38 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:48.218 07:36:38 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:48.218 07:36:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:48.218 07:36:38 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:48.219 07:36:38 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57911 00:04:48.219 07:36:38 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57911 ']' 00:04:48.219 07:36:38 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57911 00:04:48.219 07:36:38 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:48.219 07:36:38 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:48.219 07:36:38 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57911 00:04:48.219 07:36:38 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:48.219 killing process with pid 57911 00:04:48.219 07:36:38 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:48.219 07:36:38 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57911' 00:04:48.219 07:36:38 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57911 00:04:48.219 07:36:38 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57911 00:04:49.647 ************************************ 00:04:49.647 END TEST spdkcli_tcp 00:04:49.647 ************************************ 00:04:49.647 00:04:49.647 real 0m2.823s 00:04:49.647 user 0m5.127s 00:04:49.647 sys 0m0.406s 00:04:49.647 07:36:39 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.647 07:36:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:49.908 07:36:39 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:49.908 07:36:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.908 07:36:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.908 07:36:39 -- common/autotest_common.sh@10 -- # set +x 00:04:49.908 ************************************ 00:04:49.908 START TEST dpdk_mem_utility 00:04:49.908 ************************************ 00:04:49.908 07:36:39 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:49.908 * Looking for test storage... 
00:04:49.908 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:49.908 07:36:39 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:49.908 07:36:39 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:49.908 07:36:39 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:49.908 07:36:39 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:49.908 07:36:39 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:49.908 07:36:39 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:49.908 07:36:39 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:49.908 07:36:39 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.908 07:36:39 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:49.908 07:36:39 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:49.908 07:36:39 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:49.908 07:36:39 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:49.908 07:36:39 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:49.908 07:36:39 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:49.909 07:36:39 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:49.909 07:36:39 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:49.909 07:36:39 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:49.909 07:36:39 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:49.909 07:36:39 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:49.909 07:36:39 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:49.909 07:36:39 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:49.909 07:36:39 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.909 07:36:39 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:49.909 07:36:39 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:49.909 07:36:39 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:49.909 07:36:39 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:49.909 07:36:39 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.909 07:36:39 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:49.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
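The spdkcli_tcp teardown a few lines up ends in killprocess 57911, which first checks that the PID still names an SPDK reactor (ps reports reactor_0), then signals and reaps it; alias_rpc did the same for 57815. A sketch of that helper as traced, with the sudo special case left as a comment:

    killprocess() {
        local pid=$1 process_name
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for an SPDK target
        fi
        # the traced helper also special-cases process_name = sudo; omitted here
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"       # reap the child so the next test starts from a clean slate
    }

    killprocess "$spdk_tgt_pid"   # e.g. the pid captured when spdk_tgt was launched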
00:04:49.909 07:36:39 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:49.909 07:36:39 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:49.909 07:36:39 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:49.909 07:36:39 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:49.909 07:36:39 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.909 07:36:39 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:49.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.909 --rc genhtml_branch_coverage=1 00:04:49.909 --rc genhtml_function_coverage=1 00:04:49.909 --rc genhtml_legend=1 00:04:49.909 --rc geninfo_all_blocks=1 00:04:49.909 --rc geninfo_unexecuted_blocks=1 00:04:49.909 00:04:49.909 ' 00:04:49.909 07:36:39 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:49.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.909 --rc genhtml_branch_coverage=1 00:04:49.909 --rc genhtml_function_coverage=1 00:04:49.909 --rc genhtml_legend=1 00:04:49.909 --rc geninfo_all_blocks=1 00:04:49.909 --rc geninfo_unexecuted_blocks=1 00:04:49.909 00:04:49.909 ' 00:04:49.909 07:36:39 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:49.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.909 --rc genhtml_branch_coverage=1 00:04:49.909 --rc genhtml_function_coverage=1 00:04:49.909 --rc genhtml_legend=1 00:04:49.909 --rc geninfo_all_blocks=1 00:04:49.909 --rc geninfo_unexecuted_blocks=1 00:04:49.909 00:04:49.909 ' 00:04:49.909 07:36:39 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:49.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.909 --rc genhtml_branch_coverage=1 00:04:49.909 --rc genhtml_function_coverage=1 00:04:49.909 --rc genhtml_legend=1 00:04:49.909 --rc geninfo_all_blocks=1 00:04:49.909 --rc geninfo_unexecuted_blocks=1 00:04:49.909 00:04:49.909 ' 00:04:49.909 07:36:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:49.909 07:36:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58016 00:04:49.909 07:36:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58016 00:04:49.909 07:36:39 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58016 ']' 00:04:49.909 07:36:39 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.909 07:36:39 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:49.909 07:36:39 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.909 07:36:39 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:49.909 07:36:39 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:49.909 07:36:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:49.909 [2024-11-29 07:36:39.850121] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:04:50.167 [2024-11-29 07:36:39.850755] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58016 ] 00:04:50.168 [2024-11-29 07:36:40.006874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.168 [2024-11-29 07:36:40.083789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.734 07:36:40 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:50.734 07:36:40 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:50.734 07:36:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:50.734 07:36:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:50.734 07:36:40 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.734 07:36:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:50.734 { 00:04:50.734 "filename": "/tmp/spdk_mem_dump.txt" 00:04:50.734 } 00:04:50.734 07:36:40 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.734 07:36:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:50.993 DPDK memory size 824.000000 MiB in 1 heap(s) 00:04:50.993 1 heaps totaling size 824.000000 MiB 00:04:50.993 size: 824.000000 MiB heap id: 0 00:04:50.993 end heaps---------- 00:04:50.993 9 mempools totaling size 603.782043 MiB 00:04:50.993 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:50.993 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:50.993 size: 100.555481 MiB name: bdev_io_58016 00:04:50.993 size: 50.003479 MiB name: msgpool_58016 00:04:50.993 size: 36.509338 MiB name: fsdev_io_58016 00:04:50.993 size: 21.763794 MiB name: PDU_Pool 00:04:50.993 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:50.993 size: 4.133484 MiB name: evtpool_58016 00:04:50.993 size: 0.026123 MiB name: Session_Pool 00:04:50.993 end mempools------- 00:04:50.993 6 memzones totaling size 4.142822 MiB 00:04:50.993 size: 1.000366 MiB name: RG_ring_0_58016 00:04:50.993 size: 1.000366 MiB name: RG_ring_1_58016 00:04:50.993 size: 1.000366 MiB name: RG_ring_4_58016 00:04:50.993 size: 1.000366 MiB name: RG_ring_5_58016 00:04:50.993 size: 0.125366 MiB name: RG_ring_2_58016 00:04:50.993 size: 0.015991 MiB name: RG_ring_3_58016 00:04:50.993 end memzones------- 00:04:50.993 07:36:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:50.993 heap id: 0 total size: 824.000000 MiB number of busy elements: 321 number of free elements: 18 00:04:50.993 list of free elements. 
size: 16.779907 MiB 00:04:50.993 element at address: 0x200006400000 with size: 1.995972 MiB 00:04:50.993 element at address: 0x20000a600000 with size: 1.995972 MiB 00:04:50.993 element at address: 0x200003e00000 with size: 1.991028 MiB 00:04:50.993 element at address: 0x200019500040 with size: 0.999939 MiB 00:04:50.993 element at address: 0x200019900040 with size: 0.999939 MiB 00:04:50.993 element at address: 0x200019a00000 with size: 0.999084 MiB 00:04:50.993 element at address: 0x200032600000 with size: 0.994324 MiB 00:04:50.993 element at address: 0x200000400000 with size: 0.992004 MiB 00:04:50.993 element at address: 0x200019200000 with size: 0.959656 MiB 00:04:50.993 element at address: 0x200019d00040 with size: 0.936401 MiB 00:04:50.993 element at address: 0x200000200000 with size: 0.716980 MiB 00:04:50.993 element at address: 0x20001b400000 with size: 0.560974 MiB 00:04:50.993 element at address: 0x200000c00000 with size: 0.489197 MiB 00:04:50.993 element at address: 0x200019600000 with size: 0.487976 MiB 00:04:50.993 element at address: 0x200019e00000 with size: 0.485413 MiB 00:04:50.993 element at address: 0x200012c00000 with size: 0.433228 MiB 00:04:50.993 element at address: 0x200028800000 with size: 0.390930 MiB 00:04:50.993 element at address: 0x200000800000 with size: 0.350891 MiB 00:04:50.993 list of standard malloc elements. size: 199.289185 MiB 00:04:50.993 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:04:50.993 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:04:50.993 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:04:50.993 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:04:50.993 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:04:50.993 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:50.993 element at address: 0x200019deff40 with size: 0.062683 MiB 00:04:50.993 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:50.993 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:04:50.993 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:04:50.993 element at address: 0x200012bff040 with size: 0.000305 MiB 00:04:50.993 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:04:50.993 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:04:50.993 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:04:50.993 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:04:50.993 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:04:50.993 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:04:50.993 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:04:50.993 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:04:50.993 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:04:50.993 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:04:50.993 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:04:50.993 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:04:50.993 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:04:50.993 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:04:50.993 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:04:50.993 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:04:50.993 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:04:50.993 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:04:50.993 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:04:50.993 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:04:50.993 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:04:50.993 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:04:50.993 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:04:50.993 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:04:50.993 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:04:50.993 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:04:50.993 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:04:50.993 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:04:50.993 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:04:50.993 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:04:50.993 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:04:50.993 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:04:50.993 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:04:50.993 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:04:50.993 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:04:50.994 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:04:50.994 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:04:50.994 element at 
address: 0x200000c7e1c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:04:50.994 element at address: 0x200000cff000 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:04:50.994 element at address: 0x200012bff180 with size: 0.000244 MiB 00:04:50.994 element at address: 0x200012bff280 with size: 0.000244 MiB 00:04:50.994 element at address: 0x200012bff380 with size: 0.000244 MiB 00:04:50.994 element at address: 0x200012bff480 with size: 0.000244 MiB 00:04:50.994 element at address: 0x200012bff580 with size: 0.000244 MiB 00:04:50.994 element at address: 0x200012bff680 with size: 0.000244 MiB 00:04:50.994 element at address: 0x200012bff780 with size: 0.000244 MiB 00:04:50.994 element at address: 0x200012bff880 with size: 0.000244 MiB 00:04:50.994 element at address: 0x200012bff980 with size: 0.000244 MiB 00:04:50.994 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:04:50.994 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:04:50.994 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:04:50.994 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:04:50.994 element at address: 0x200012c6ee80 with size: 0.000244 MiB 00:04:50.994 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:04:50.994 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:04:50.994 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:04:50.994 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:04:50.994 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:04:50.994 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:04:50.994 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:04:50.994 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:04:50.994 element at address: 0x200012c6f780 
with size: 0.000244 MiB 00:04:50.994 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:04:50.994 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:04:50.994 element at address: 0x200019affc40 with size: 0.000244 MiB 00:04:50.994 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001b48f9c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001b48fac0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001b48fbc0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001b48fcc0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001b4915c0 with size: 0.000244 MiB 
00:04:50.994 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:04:50.994 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:04:50.995 element at 
address: 0x20001b4947c0 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:04:50.995 element at address: 0x200028864140 with size: 0.000244 MiB 00:04:50.995 element at address: 0x200028864240 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886af00 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886b180 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886b280 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886b380 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886b480 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886b580 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886b680 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886b780 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886b880 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886b980 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886be80 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886c080 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886c180 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886c280 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886c380 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886c480 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886c580 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886c680 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886c780 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886c880 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886c980 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886d080 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886d180 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886d280 
with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886d380 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886d480 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886d580 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886d680 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886d780 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886d880 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886d980 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886da80 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886db80 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886de80 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886df80 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886e080 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886e180 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886e280 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886e380 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886e480 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886e580 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886e680 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886e780 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886e880 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886e980 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886f080 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886f180 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886f280 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886f380 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886f480 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886f580 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886f680 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886f780 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886f880 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886f980 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:04:50.995 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:04:50.995 list of memzone associated elements. 
size: 607.930908 MiB 00:04:50.995 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:04:50.995 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:50.995 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:04:50.995 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:50.995 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:04:50.995 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58016_0 00:04:50.995 element at address: 0x200000dff340 with size: 48.003113 MiB 00:04:50.995 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58016_0 00:04:50.995 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:04:50.995 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58016_0 00:04:50.995 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:04:50.995 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:50.995 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:04:50.995 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:50.995 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:04:50.995 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58016_0 00:04:50.995 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:04:50.995 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58016 00:04:50.995 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:50.995 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58016 00:04:50.995 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:04:50.995 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:50.995 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:04:50.995 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:50.995 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:04:50.995 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:50.995 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:04:50.995 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:50.995 element at address: 0x200000cff100 with size: 1.000549 MiB 00:04:50.995 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58016 00:04:50.996 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:04:50.996 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58016 00:04:50.996 element at address: 0x200019affd40 with size: 1.000549 MiB 00:04:50.996 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58016 00:04:50.996 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:04:50.996 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58016 00:04:50.996 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:04:50.996 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58016 00:04:50.996 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:04:50.996 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58016 00:04:50.996 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:04:50.996 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:50.996 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:04:50.996 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:50.996 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:04:50.996 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:04:50.996 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:04:50.996 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58016 00:04:50.996 element at address: 0x20000085df80 with size: 0.125549 MiB 00:04:50.996 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58016 00:04:50.996 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:04:50.996 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:50.996 element at address: 0x200028864340 with size: 0.023804 MiB 00:04:50.996 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:50.996 element at address: 0x200000859d40 with size: 0.016174 MiB 00:04:50.996 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58016 00:04:50.996 element at address: 0x20002886a4c0 with size: 0.002502 MiB 00:04:50.996 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:50.996 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:04:50.996 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58016 00:04:50.996 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:04:50.996 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58016 00:04:50.996 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:04:50.996 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58016 00:04:50.996 element at address: 0x20002886b000 with size: 0.000366 MiB 00:04:50.996 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:50.996 07:36:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:50.996 07:36:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58016 00:04:50.996 07:36:40 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58016 ']' 00:04:50.996 07:36:40 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58016 00:04:50.996 07:36:40 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:50.996 07:36:40 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:50.996 07:36:40 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58016 00:04:50.996 07:36:40 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:50.996 07:36:40 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:50.996 07:36:40 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58016' 00:04:50.996 killing process with pid 58016 00:04:50.996 07:36:40 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58016 00:04:50.996 07:36:40 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58016 00:04:52.372 00:04:52.372 real 0m2.363s 00:04:52.372 user 0m2.391s 00:04:52.372 sys 0m0.369s 00:04:52.372 07:36:41 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.372 ************************************ 00:04:52.372 07:36:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:52.372 END TEST dpdk_mem_utility 00:04:52.372 ************************************ 00:04:52.372 07:36:42 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:52.372 07:36:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.372 07:36:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.372 07:36:42 -- common/autotest_common.sh@10 -- # set +x 
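The teardown traced just above follows a recurring pattern in this log: killprocess first checks that a PID was supplied ('[' -z 58016 ']'), probes it with kill -0, recovers the process name via ps --no-headers -o comm= so a sudo-wrapped process can be treated differently, then kills and waits on it. A minimal sketch of such a helper, reconstructed from the traced commands (an assumption, not the actual common/autotest_common.sh source):

killprocess() {
    # Sketch only: inferred from the kill -0 / uname / ps / kill / wait trace above.
    local pid=$1
    [ -n "$pid" ] || return 1                  # a PID must be supplied
    kill -0 "$pid" 2>/dev/null || return 0     # already gone, nothing to do
    local process_name=""
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    if [ "$process_name" = sudo ]; then
        sudo kill "$pid"                       # target runs under sudo, signal it with the same rights
    else
        echo "killing process with pid $pid"
        kill "$pid"
    fi
    wait "$pid" 2>/dev/null || true            # block until the child has really exited
}

Called here as killprocess 58016 against the dpdk_mem_utility target application before the suite reports its real/user/sys timing.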
00:04:52.372 ************************************ 00:04:52.372 START TEST event 00:04:52.372 ************************************ 00:04:52.372 07:36:42 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:52.372 * Looking for test storage... 00:04:52.372 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:52.372 07:36:42 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:52.372 07:36:42 event -- common/autotest_common.sh@1693 -- # lcov --version 00:04:52.372 07:36:42 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:52.372 07:36:42 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:52.372 07:36:42 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:52.372 07:36:42 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:52.372 07:36:42 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:52.372 07:36:42 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.372 07:36:42 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:52.372 07:36:42 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:52.372 07:36:42 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:52.372 07:36:42 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:52.372 07:36:42 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:52.372 07:36:42 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:52.372 07:36:42 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:52.372 07:36:42 event -- scripts/common.sh@344 -- # case "$op" in 00:04:52.372 07:36:42 event -- scripts/common.sh@345 -- # : 1 00:04:52.372 07:36:42 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:52.372 07:36:42 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:52.372 07:36:42 event -- scripts/common.sh@365 -- # decimal 1 00:04:52.372 07:36:42 event -- scripts/common.sh@353 -- # local d=1 00:04:52.372 07:36:42 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.372 07:36:42 event -- scripts/common.sh@355 -- # echo 1 00:04:52.372 07:36:42 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:52.372 07:36:42 event -- scripts/common.sh@366 -- # decimal 2 00:04:52.372 07:36:42 event -- scripts/common.sh@353 -- # local d=2 00:04:52.372 07:36:42 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.372 07:36:42 event -- scripts/common.sh@355 -- # echo 2 00:04:52.372 07:36:42 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:52.372 07:36:42 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:52.372 07:36:42 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:52.372 07:36:42 event -- scripts/common.sh@368 -- # return 0 00:04:52.372 07:36:42 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.372 07:36:42 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:52.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.372 --rc genhtml_branch_coverage=1 00:04:52.372 --rc genhtml_function_coverage=1 00:04:52.372 --rc genhtml_legend=1 00:04:52.372 --rc geninfo_all_blocks=1 00:04:52.372 --rc geninfo_unexecuted_blocks=1 00:04:52.372 00:04:52.372 ' 00:04:52.372 07:36:42 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:52.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.372 --rc genhtml_branch_coverage=1 00:04:52.372 --rc genhtml_function_coverage=1 00:04:52.372 --rc genhtml_legend=1 00:04:52.372 --rc 
geninfo_all_blocks=1 00:04:52.372 --rc geninfo_unexecuted_blocks=1 00:04:52.372 00:04:52.372 ' 00:04:52.372 07:36:42 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:52.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.372 --rc genhtml_branch_coverage=1 00:04:52.372 --rc genhtml_function_coverage=1 00:04:52.372 --rc genhtml_legend=1 00:04:52.372 --rc geninfo_all_blocks=1 00:04:52.372 --rc geninfo_unexecuted_blocks=1 00:04:52.372 00:04:52.372 ' 00:04:52.372 07:36:42 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:52.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.372 --rc genhtml_branch_coverage=1 00:04:52.372 --rc genhtml_function_coverage=1 00:04:52.372 --rc genhtml_legend=1 00:04:52.372 --rc geninfo_all_blocks=1 00:04:52.372 --rc geninfo_unexecuted_blocks=1 00:04:52.372 00:04:52.372 ' 00:04:52.372 07:36:42 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:52.372 07:36:42 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:52.372 07:36:42 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:52.372 07:36:42 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:52.372 07:36:42 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.372 07:36:42 event -- common/autotest_common.sh@10 -- # set +x 00:04:52.372 ************************************ 00:04:52.372 START TEST event_perf 00:04:52.372 ************************************ 00:04:52.372 07:36:42 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:52.372 Running I/O for 1 seconds...[2024-11-29 07:36:42.228226] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:04:52.372 [2024-11-29 07:36:42.228331] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58108 ] 00:04:52.631 [2024-11-29 07:36:42.383509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:52.631 [2024-11-29 07:36:42.463946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:52.631 [2024-11-29 07:36:42.463989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:52.631 [2024-11-29 07:36:42.464194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:52.631 Running I/O for 1 seconds...[2024-11-29 07:36:42.464302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.004 00:04:54.004 lcore 0: 205696 00:04:54.004 lcore 1: 205700 00:04:54.004 lcore 2: 205697 00:04:54.004 lcore 3: 205697 00:04:54.004 done. 
00:04:54.004 00:04:54.004 real 0m1.396s 00:04:54.004 user 0m4.209s 00:04:54.004 sys 0m0.070s 00:04:54.004 07:36:43 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.004 ************************************ 00:04:54.004 END TEST event_perf 00:04:54.004 ************************************ 00:04:54.004 07:36:43 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:54.004 07:36:43 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:54.004 07:36:43 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:54.004 07:36:43 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.004 07:36:43 event -- common/autotest_common.sh@10 -- # set +x 00:04:54.004 ************************************ 00:04:54.004 START TEST event_reactor 00:04:54.004 ************************************ 00:04:54.004 07:36:43 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:54.004 [2024-11-29 07:36:43.679043] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:04:54.004 [2024-11-29 07:36:43.679564] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58147 ] 00:04:54.004 [2024-11-29 07:36:43.834421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.004 [2024-11-29 07:36:43.911273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.377 test_start 00:04:55.377 oneshot 00:04:55.377 tick 100 00:04:55.377 tick 100 00:04:55.377 tick 250 00:04:55.377 tick 100 00:04:55.377 tick 100 00:04:55.377 tick 250 00:04:55.377 tick 100 00:04:55.377 tick 500 00:04:55.377 tick 100 00:04:55.377 tick 100 00:04:55.377 tick 250 00:04:55.377 tick 100 00:04:55.377 tick 100 00:04:55.377 test_end 00:04:55.377 ************************************ 00:04:55.377 00:04:55.377 real 0m1.380s 00:04:55.377 user 0m1.207s 00:04:55.377 sys 0m0.065s 00:04:55.377 07:36:45 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.377 07:36:45 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:55.377 END TEST event_reactor 00:04:55.377 ************************************ 00:04:55.377 07:36:45 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:55.377 07:36:45 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:55.377 07:36:45 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.377 07:36:45 event -- common/autotest_common.sh@10 -- # set +x 00:04:55.377 ************************************ 00:04:55.377 START TEST event_reactor_perf 00:04:55.377 ************************************ 00:04:55.377 07:36:45 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:55.377 [2024-11-29 07:36:45.115786] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:04:55.378 [2024-11-29 07:36:45.116001] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58184 ] 00:04:55.378 [2024-11-29 07:36:45.276136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.637 [2024-11-29 07:36:45.370115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.576 test_start 00:04:56.576 test_end 00:04:56.576 Performance: 324989 events per second 00:04:56.576 ************************************ 00:04:56.576 END TEST event_reactor_perf 00:04:56.576 ************************************ 00:04:56.576 00:04:56.576 real 0m1.397s 00:04:56.576 user 0m1.227s 00:04:56.576 sys 0m0.063s 00:04:56.576 07:36:46 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.576 07:36:46 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:56.835 07:36:46 event -- event/event.sh@49 -- # uname -s 00:04:56.835 07:36:46 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:56.835 07:36:46 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:56.835 07:36:46 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.835 07:36:46 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.835 07:36:46 event -- common/autotest_common.sh@10 -- # set +x 00:04:56.835 ************************************ 00:04:56.835 START TEST event_scheduler 00:04:56.835 ************************************ 00:04:56.835 07:36:46 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:56.835 * Looking for test storage... 
00:04:56.835 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:56.835 07:36:46 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:56.835 07:36:46 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:56.835 07:36:46 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:04:56.835 07:36:46 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:56.835 07:36:46 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:56.835 07:36:46 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:56.835 07:36:46 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:56.835 07:36:46 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:56.835 07:36:46 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:56.835 07:36:46 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:56.835 07:36:46 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:56.835 07:36:46 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:56.835 07:36:46 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:56.835 07:36:46 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:56.835 07:36:46 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:56.835 07:36:46 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:56.835 07:36:46 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:56.835 07:36:46 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:56.835 07:36:46 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:56.835 07:36:46 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:56.835 07:36:46 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:56.835 07:36:46 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:56.835 07:36:46 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:56.835 07:36:46 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:56.835 07:36:46 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:56.835 07:36:46 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:56.835 07:36:46 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:56.835 07:36:46 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:56.835 07:36:46 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:56.835 07:36:46 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:56.835 07:36:46 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:56.835 07:36:46 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:56.835 07:36:46 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:56.835 07:36:46 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:56.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.835 --rc genhtml_branch_coverage=1 00:04:56.835 --rc genhtml_function_coverage=1 00:04:56.835 --rc genhtml_legend=1 00:04:56.835 --rc geninfo_all_blocks=1 00:04:56.835 --rc geninfo_unexecuted_blocks=1 00:04:56.835 00:04:56.835 ' 00:04:56.835 07:36:46 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:56.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.835 --rc genhtml_branch_coverage=1 00:04:56.835 --rc genhtml_function_coverage=1 00:04:56.835 --rc genhtml_legend=1 00:04:56.835 --rc geninfo_all_blocks=1 00:04:56.835 --rc geninfo_unexecuted_blocks=1 00:04:56.835 00:04:56.835 ' 00:04:56.835 07:36:46 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:56.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.835 --rc genhtml_branch_coverage=1 00:04:56.835 --rc genhtml_function_coverage=1 00:04:56.835 --rc genhtml_legend=1 00:04:56.835 --rc geninfo_all_blocks=1 00:04:56.835 --rc geninfo_unexecuted_blocks=1 00:04:56.835 00:04:56.835 ' 00:04:56.835 07:36:46 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:56.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.836 --rc genhtml_branch_coverage=1 00:04:56.836 --rc genhtml_function_coverage=1 00:04:56.836 --rc genhtml_legend=1 00:04:56.836 --rc geninfo_all_blocks=1 00:04:56.836 --rc geninfo_unexecuted_blocks=1 00:04:56.836 00:04:56.836 ' 00:04:56.836 07:36:46 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:56.836 07:36:46 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58254 00:04:56.836 07:36:46 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:56.836 07:36:46 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58254 00:04:56.836 07:36:46 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58254 ']' 00:04:56.836 07:36:46 event.event_scheduler -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:04:56.836 07:36:46 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:56.836 07:36:46 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:56.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.836 07:36:46 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.836 07:36:46 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:56.836 07:36:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:56.836 [2024-11-29 07:36:46.742116] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:04:56.836 [2024-11-29 07:36:46.743130] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58254 ] 00:04:57.093 [2024-11-29 07:36:46.895142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:57.093 [2024-11-29 07:36:46.995910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.094 [2024-11-29 07:36:46.996550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:57.094 [2024-11-29 07:36:46.996784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:57.094 [2024-11-29 07:36:46.996865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:57.662 07:36:47 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:57.662 07:36:47 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:57.662 07:36:47 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:57.662 07:36:47 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.662 07:36:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:57.662 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:57.662 POWER: Cannot set governor of lcore 0 to userspace 00:04:57.662 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:57.662 POWER: Cannot set governor of lcore 0 to performance 00:04:57.662 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:57.662 POWER: Cannot set governor of lcore 0 to userspace 00:04:57.662 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:57.662 POWER: Cannot set governor of lcore 0 to userspace 00:04:57.662 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:57.662 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:57.662 POWER: Unable to set Power Management Environment for lcore 0 00:04:57.662 [2024-11-29 07:36:47.538540] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:04:57.662 [2024-11-29 07:36:47.538560] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:04:57.662 [2024-11-29 07:36:47.538569] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:57.662 [2024-11-29 07:36:47.538593] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:57.662 [2024-11-29 07:36:47.538601] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:57.662 [2024-11-29 07:36:47.538609] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:57.662 07:36:47 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.662 07:36:47 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:57.662 07:36:47 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.662 07:36:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:57.922 [2024-11-29 07:36:47.763325] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:57.922 07:36:47 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.922 07:36:47 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:57.922 07:36:47 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.922 07:36:47 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.922 07:36:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:57.922 ************************************ 00:04:57.922 START TEST scheduler_create_thread 00:04:57.922 ************************************ 00:04:57.922 07:36:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:57.922 07:36:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:57.922 07:36:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.922 07:36:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.922 2 00:04:57.922 07:36:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.922 07:36:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:57.922 07:36:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.922 07:36:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.922 3 00:04:57.922 07:36:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.922 07:36:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:57.922 07:36:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.922 07:36:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.922 4 00:04:57.922 07:36:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.922 07:36:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:57.922 07:36:47 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.922 07:36:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.922 5 00:04:57.922 07:36:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.922 07:36:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:57.922 07:36:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.922 07:36:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.922 6 00:04:57.922 07:36:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.922 07:36:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:57.922 07:36:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.922 07:36:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.922 7 00:04:57.922 07:36:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.922 07:36:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:57.922 07:36:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.922 07:36:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.922 8 00:04:57.922 07:36:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.922 07:36:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:57.922 07:36:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.922 07:36:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.922 9 00:04:57.922 07:36:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.922 07:36:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:57.923 07:36:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.923 07:36:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.923 10 00:04:57.923 07:36:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.923 07:36:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:57.923 07:36:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.923 07:36:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:58.184 07:36:47 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.184 07:36:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:58.184 07:36:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:58.184 07:36:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.184 07:36:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:58.184 07:36:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.184 07:36:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:58.184 07:36:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.184 07:36:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:58.184 07:36:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.184 07:36:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:58.184 07:36:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:58.184 07:36:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.184 07:36:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.181 ************************************ 00:04:59.181 END TEST scheduler_create_thread 00:04:59.181 ************************************ 00:04:59.181 07:36:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.181 00:04:59.181 real 0m1.171s 00:04:59.181 user 0m0.014s 00:04:59.181 sys 0m0.004s 00:04:59.181 07:36:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.181 07:36:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.181 07:36:48 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:59.181 07:36:48 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58254 00:04:59.181 07:36:48 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58254 ']' 00:04:59.181 07:36:48 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58254 00:04:59.181 07:36:48 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:59.181 07:36:48 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:59.181 07:36:48 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58254 00:04:59.181 killing process with pid 58254 00:04:59.181 07:36:49 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:59.181 07:36:49 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:59.181 07:36:49 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58254' 00:04:59.181 07:36:49 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58254 00:04:59.181 07:36:49 event.event_scheduler -- 
common/autotest_common.sh@978 -- # wait 58254 00:04:59.747 [2024-11-29 07:36:49.427589] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:00.344 00:05:00.344 real 0m3.457s 00:05:00.344 user 0m5.553s 00:05:00.344 sys 0m0.308s 00:05:00.344 07:36:50 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.344 07:36:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:00.344 ************************************ 00:05:00.344 END TEST event_scheduler 00:05:00.344 ************************************ 00:05:00.344 07:36:50 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:00.344 07:36:50 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:00.344 07:36:50 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.344 07:36:50 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.344 07:36:50 event -- common/autotest_common.sh@10 -- # set +x 00:05:00.344 ************************************ 00:05:00.344 START TEST app_repeat 00:05:00.344 ************************************ 00:05:00.344 07:36:50 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:00.344 07:36:50 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.344 07:36:50 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.344 07:36:50 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:00.344 07:36:50 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:00.344 07:36:50 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:00.344 07:36:50 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:00.344 07:36:50 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:00.344 Process app_repeat pid: 58344 00:05:00.344 spdk_app_start Round 0 00:05:00.344 07:36:50 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58344 00:05:00.344 07:36:50 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:00.344 07:36:50 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:00.344 07:36:50 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58344' 00:05:00.344 07:36:50 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:00.344 07:36:50 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:00.344 07:36:50 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58344 /var/tmp/spdk-nbd.sock 00:05:00.344 07:36:50 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58344 ']' 00:05:00.344 07:36:50 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:00.344 07:36:50 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:00.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:00.344 07:36:50 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:00.344 07:36:50 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:00.344 07:36:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:00.344 [2024-11-29 07:36:50.086373] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:05:00.344 [2024-11-29 07:36:50.086592] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58344 ] 00:05:00.344 [2024-11-29 07:36:50.236249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:00.605 [2024-11-29 07:36:50.334419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.605 [2024-11-29 07:36:50.334432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:01.176 07:36:50 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:01.176 07:36:50 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:01.176 07:36:50 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:01.437 Malloc0 00:05:01.437 07:36:51 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:01.698 Malloc1 00:05:01.698 07:36:51 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:01.698 07:36:51 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.698 07:36:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:01.698 07:36:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:01.698 07:36:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.698 07:36:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:01.698 07:36:51 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:01.698 07:36:51 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.698 07:36:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:01.698 07:36:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:01.698 07:36:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.698 07:36:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:01.698 07:36:51 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:01.698 07:36:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:01.698 07:36:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:01.698 07:36:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:01.958 /dev/nbd0 00:05:01.958 07:36:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:01.958 07:36:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:01.958 07:36:51 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:01.958 07:36:51 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:01.958 07:36:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:01.958 07:36:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:01.958 07:36:51 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:01.958 07:36:51 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:05:01.958 07:36:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:01.958 07:36:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:01.958 07:36:51 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:01.958 1+0 records in 00:05:01.958 1+0 records out 00:05:01.958 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000262569 s, 15.6 MB/s 00:05:01.958 07:36:51 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:01.958 07:36:51 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:01.958 07:36:51 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:01.958 07:36:51 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:01.958 07:36:51 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:01.958 07:36:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:01.958 07:36:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:01.958 07:36:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:02.217 /dev/nbd1 00:05:02.217 07:36:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:02.217 07:36:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:02.217 07:36:51 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:02.217 07:36:51 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:02.217 07:36:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:02.217 07:36:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:02.217 07:36:51 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:02.217 07:36:51 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:02.217 07:36:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:02.217 07:36:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:02.217 07:36:51 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:02.217 1+0 records in 00:05:02.217 1+0 records out 00:05:02.217 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224886 s, 18.2 MB/s 00:05:02.217 07:36:51 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:02.217 07:36:51 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:02.217 07:36:51 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:02.217 07:36:51 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:02.217 07:36:51 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:02.217 07:36:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:02.217 07:36:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:02.217 07:36:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:02.217 07:36:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
00:05:02.217 07:36:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:02.217 07:36:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:02.217 { 00:05:02.217 "nbd_device": "/dev/nbd0", 00:05:02.217 "bdev_name": "Malloc0" 00:05:02.217 }, 00:05:02.217 { 00:05:02.217 "nbd_device": "/dev/nbd1", 00:05:02.217 "bdev_name": "Malloc1" 00:05:02.217 } 00:05:02.217 ]' 00:05:02.217 07:36:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:02.217 { 00:05:02.217 "nbd_device": "/dev/nbd0", 00:05:02.217 "bdev_name": "Malloc0" 00:05:02.217 }, 00:05:02.217 { 00:05:02.217 "nbd_device": "/dev/nbd1", 00:05:02.217 "bdev_name": "Malloc1" 00:05:02.217 } 00:05:02.217 ]' 00:05:02.217 07:36:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:02.476 07:36:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:02.476 /dev/nbd1' 00:05:02.476 07:36:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:02.476 07:36:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:02.476 /dev/nbd1' 00:05:02.476 07:36:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:02.476 07:36:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:02.476 07:36:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:02.476 07:36:52 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:02.476 07:36:52 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:02.476 07:36:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.476 07:36:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:02.476 07:36:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:02.476 07:36:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:02.476 07:36:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:02.476 07:36:52 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:02.476 256+0 records in 00:05:02.476 256+0 records out 00:05:02.476 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00824822 s, 127 MB/s 00:05:02.476 07:36:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:02.476 07:36:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:02.476 256+0 records in 00:05:02.476 256+0 records out 00:05:02.476 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0199826 s, 52.5 MB/s 00:05:02.476 07:36:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:02.476 07:36:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:02.476 256+0 records in 00:05:02.476 256+0 records out 00:05:02.476 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0171612 s, 61.1 MB/s 00:05:02.476 07:36:52 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:02.476 07:36:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.476 07:36:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:02.476 07:36:52 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:02.476 07:36:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:02.476 07:36:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:02.476 07:36:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:02.476 07:36:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:02.476 07:36:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:02.476 07:36:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:02.476 07:36:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:02.476 07:36:52 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:02.476 07:36:52 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:02.476 07:36:52 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.476 07:36:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.476 07:36:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:02.476 07:36:52 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:02.476 07:36:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:02.476 07:36:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:02.734 07:36:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:02.734 07:36:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:02.734 07:36:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:02.734 07:36:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:02.734 07:36:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:02.734 07:36:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:02.734 07:36:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:02.734 07:36:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:02.734 07:36:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:02.734 07:36:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:02.734 07:36:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:02.734 07:36:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:02.734 07:36:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:02.734 07:36:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:02.734 07:36:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:02.734 07:36:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:02.734 07:36:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:02.734 07:36:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:02.734 07:36:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:02.734 07:36:52 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.734 07:36:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:02.992 07:36:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:02.992 07:36:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:02.992 07:36:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:02.992 07:36:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:02.992 07:36:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:02.992 07:36:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:02.992 07:36:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:02.992 07:36:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:02.992 07:36:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:02.992 07:36:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:02.992 07:36:52 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:02.992 07:36:52 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:02.992 07:36:52 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:03.559 07:36:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:04.126 [2024-11-29 07:36:53.842729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:04.126 [2024-11-29 07:36:53.912901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.126 [2024-11-29 07:36:53.912914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.126 [2024-11-29 07:36:54.012975] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:04.126 [2024-11-29 07:36:54.013032] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:06.657 spdk_app_start Round 1 00:05:06.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:06.657 07:36:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:06.657 07:36:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:06.657 07:36:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58344 /var/tmp/spdk-nbd.sock 00:05:06.657 07:36:56 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58344 ']' 00:05:06.657 07:36:56 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:06.657 07:36:56 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:06.657 07:36:56 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:06.657 07:36:56 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:06.657 07:36:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:06.657 07:36:56 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:06.657 07:36:56 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:06.657 07:36:56 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:06.915 Malloc0 00:05:06.915 07:36:56 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:07.174 Malloc1 00:05:07.174 07:36:56 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:07.174 07:36:56 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.174 07:36:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:07.174 07:36:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:07.174 07:36:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.174 07:36:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:07.174 07:36:56 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:07.175 07:36:56 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.175 07:36:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:07.175 07:36:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:07.175 07:36:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.175 07:36:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:07.175 07:36:56 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:07.175 07:36:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:07.175 07:36:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:07.175 07:36:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:07.175 /dev/nbd0 00:05:07.175 07:36:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:07.175 07:36:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:07.175 07:36:57 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:07.175 07:36:57 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:07.175 07:36:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:07.175 07:36:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:07.175 07:36:57 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:07.175 07:36:57 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:07.175 07:36:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:07.175 07:36:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:07.175 07:36:57 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:07.175 1+0 records in 00:05:07.175 1+0 records out 
00:05:07.175 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229228 s, 17.9 MB/s 00:05:07.175 07:36:57 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:07.175 07:36:57 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:07.175 07:36:57 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:07.175 07:36:57 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:07.175 07:36:57 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:07.175 07:36:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:07.175 07:36:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:07.175 07:36:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:07.434 /dev/nbd1 00:05:07.434 07:36:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:07.434 07:36:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:07.434 07:36:57 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:07.434 07:36:57 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:07.434 07:36:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:07.434 07:36:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:07.434 07:36:57 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:07.434 07:36:57 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:07.434 07:36:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:07.434 07:36:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:07.434 07:36:57 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:07.434 1+0 records in 00:05:07.434 1+0 records out 00:05:07.434 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000185246 s, 22.1 MB/s 00:05:07.434 07:36:57 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:07.434 07:36:57 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:07.434 07:36:57 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:07.434 07:36:57 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:07.434 07:36:57 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:07.434 07:36:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:07.434 07:36:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:07.434 07:36:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:07.434 07:36:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.434 07:36:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:07.692 07:36:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:07.692 { 00:05:07.692 "nbd_device": "/dev/nbd0", 00:05:07.692 "bdev_name": "Malloc0" 00:05:07.692 }, 00:05:07.692 { 00:05:07.692 "nbd_device": "/dev/nbd1", 00:05:07.692 "bdev_name": "Malloc1" 00:05:07.692 } 
00:05:07.692 ]' 00:05:07.692 07:36:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:07.692 { 00:05:07.692 "nbd_device": "/dev/nbd0", 00:05:07.692 "bdev_name": "Malloc0" 00:05:07.692 }, 00:05:07.692 { 00:05:07.692 "nbd_device": "/dev/nbd1", 00:05:07.692 "bdev_name": "Malloc1" 00:05:07.692 } 00:05:07.692 ]' 00:05:07.692 07:36:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:07.692 07:36:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:07.692 /dev/nbd1' 00:05:07.692 07:36:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:07.692 /dev/nbd1' 00:05:07.692 07:36:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:07.692 07:36:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:07.692 07:36:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:07.692 07:36:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:07.692 07:36:57 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:07.692 07:36:57 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:07.692 07:36:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.692 07:36:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:07.692 07:36:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:07.692 07:36:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:07.692 07:36:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:07.692 07:36:57 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:07.692 256+0 records in 00:05:07.692 256+0 records out 00:05:07.692 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00599081 s, 175 MB/s 00:05:07.692 07:36:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:07.692 07:36:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:07.692 256+0 records in 00:05:07.692 256+0 records out 00:05:07.692 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146129 s, 71.8 MB/s 00:05:07.692 07:36:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:07.692 07:36:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:07.951 256+0 records in 00:05:07.951 256+0 records out 00:05:07.951 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146173 s, 71.7 MB/s 00:05:07.951 07:36:57 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:07.951 07:36:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.951 07:36:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:07.951 07:36:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:07.951 07:36:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:07.951 07:36:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:07.951 07:36:57 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:07.951 07:36:57 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:07.951 07:36:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:07.951 07:36:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:07.951 07:36:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:07.951 07:36:57 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:07.951 07:36:57 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:07.951 07:36:57 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.951 07:36:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.951 07:36:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:07.951 07:36:57 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:07.951 07:36:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:07.951 07:36:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:07.951 07:36:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:07.951 07:36:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:07.951 07:36:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:07.951 07:36:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:07.951 07:36:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:07.951 07:36:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:07.951 07:36:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:07.951 07:36:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:07.951 07:36:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:07.951 07:36:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:08.209 07:36:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:08.209 07:36:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:08.209 07:36:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:08.209 07:36:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:08.209 07:36:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:08.209 07:36:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:08.209 07:36:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:08.209 07:36:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:08.209 07:36:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:08.209 07:36:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.209 07:36:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:08.468 07:36:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:08.468 07:36:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:08.468 07:36:58 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:05:08.468 07:36:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:08.468 07:36:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:08.468 07:36:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:08.468 07:36:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:08.468 07:36:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:08.468 07:36:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:08.468 07:36:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:08.468 07:36:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:08.468 07:36:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:08.468 07:36:58 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:08.727 07:36:58 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:09.293 [2024-11-29 07:36:59.138942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:09.293 [2024-11-29 07:36:59.208477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.293 [2024-11-29 07:36:59.208479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.552 [2024-11-29 07:36:59.309171] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:09.552 [2024-11-29 07:36:59.309214] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:12.083 spdk_app_start Round 2 00:05:12.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:12.083 07:37:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:12.083 07:37:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:12.083 07:37:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58344 /var/tmp/spdk-nbd.sock 00:05:12.083 07:37:01 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58344 ']' 00:05:12.083 07:37:01 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:12.083 07:37:01 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:12.083 07:37:01 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:12.083 07:37:01 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:12.083 07:37:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:12.083 07:37:01 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.083 07:37:01 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:12.083 07:37:01 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:12.083 Malloc0 00:05:12.083 07:37:02 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:12.339 Malloc1 00:05:12.339 07:37:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:12.340 07:37:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.340 07:37:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:12.340 07:37:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:12.340 07:37:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.340 07:37:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:12.340 07:37:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:12.340 07:37:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.340 07:37:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:12.340 07:37:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:12.340 07:37:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.340 07:37:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:12.340 07:37:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:12.340 07:37:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:12.340 07:37:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.340 07:37:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:12.597 /dev/nbd0 00:05:12.598 07:37:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:12.598 07:37:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:12.598 07:37:02 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:12.598 07:37:02 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:12.598 07:37:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:12.598 07:37:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:12.598 07:37:02 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:12.598 07:37:02 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:12.598 07:37:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:12.598 07:37:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:12.598 07:37:02 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:12.598 1+0 records in 00:05:12.598 1+0 records out 
00:05:12.598 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000501457 s, 8.2 MB/s 00:05:12.598 07:37:02 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:12.598 07:37:02 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:12.598 07:37:02 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:12.598 07:37:02 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:12.598 07:37:02 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:12.598 07:37:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:12.598 07:37:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.598 07:37:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:12.855 /dev/nbd1 00:05:12.855 07:37:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:12.855 07:37:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:12.855 07:37:02 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:12.855 07:37:02 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:12.855 07:37:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:12.855 07:37:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:12.855 07:37:02 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:12.855 07:37:02 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:12.855 07:37:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:12.855 07:37:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:12.855 07:37:02 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:12.855 1+0 records in 00:05:12.855 1+0 records out 00:05:12.855 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248576 s, 16.5 MB/s 00:05:12.856 07:37:02 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:12.856 07:37:02 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:12.856 07:37:02 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:12.856 07:37:02 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:12.856 07:37:02 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:12.856 07:37:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:12.856 07:37:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.856 07:37:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:12.856 07:37:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.856 07:37:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:13.114 07:37:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:13.114 { 00:05:13.114 "nbd_device": "/dev/nbd0", 00:05:13.114 "bdev_name": "Malloc0" 00:05:13.114 }, 00:05:13.114 { 00:05:13.114 "nbd_device": "/dev/nbd1", 00:05:13.114 "bdev_name": "Malloc1" 00:05:13.114 } 
00:05:13.114 ]' 00:05:13.114 07:37:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:13.114 07:37:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:13.114 { 00:05:13.114 "nbd_device": "/dev/nbd0", 00:05:13.114 "bdev_name": "Malloc0" 00:05:13.114 }, 00:05:13.114 { 00:05:13.114 "nbd_device": "/dev/nbd1", 00:05:13.114 "bdev_name": "Malloc1" 00:05:13.114 } 00:05:13.114 ]' 00:05:13.114 07:37:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:13.114 /dev/nbd1' 00:05:13.114 07:37:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:13.114 07:37:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:13.114 /dev/nbd1' 00:05:13.114 07:37:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:13.114 07:37:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:13.114 07:37:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:13.114 07:37:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:13.114 07:37:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:13.114 07:37:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.114 07:37:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:13.114 07:37:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:13.114 07:37:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:13.114 07:37:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:13.114 07:37:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:13.114 256+0 records in 00:05:13.114 256+0 records out 00:05:13.114 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00596297 s, 176 MB/s 00:05:13.114 07:37:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:13.114 07:37:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:13.114 256+0 records in 00:05:13.114 256+0 records out 00:05:13.114 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0152748 s, 68.6 MB/s 00:05:13.114 07:37:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:13.114 07:37:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:13.114 256+0 records in 00:05:13.114 256+0 records out 00:05:13.114 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0170913 s, 61.4 MB/s 00:05:13.114 07:37:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:13.114 07:37:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.114 07:37:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:13.114 07:37:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:13.114 07:37:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:13.114 07:37:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:13.114 07:37:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:13.114 07:37:02 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:13.114 07:37:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:13.114 07:37:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:13.114 07:37:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:13.114 07:37:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:13.114 07:37:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:13.114 07:37:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.114 07:37:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.114 07:37:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:13.114 07:37:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:13.114 07:37:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:13.114 07:37:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:13.372 07:37:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:13.372 07:37:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:13.372 07:37:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:13.372 07:37:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:13.372 07:37:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:13.372 07:37:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:13.372 07:37:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:13.372 07:37:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:13.372 07:37:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:13.372 07:37:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:13.630 07:37:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:13.630 07:37:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:13.630 07:37:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:13.630 07:37:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:13.630 07:37:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:13.630 07:37:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:13.630 07:37:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:13.630 07:37:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:13.630 07:37:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:13.630 07:37:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.630 07:37:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:13.888 07:37:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:13.888 07:37:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:13.888 07:37:03 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:13.888 07:37:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:13.888 07:37:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:13.888 07:37:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:13.888 07:37:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:13.888 07:37:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:13.888 07:37:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:13.888 07:37:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:13.888 07:37:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:13.888 07:37:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:13.888 07:37:03 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:14.146 07:37:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:14.711 [2024-11-29 07:37:04.507187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:14.711 [2024-11-29 07:37:04.575415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:14.711 [2024-11-29 07:37:04.575418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.969 [2024-11-29 07:37:04.671412] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:14.969 [2024-11-29 07:37:04.671477] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:17.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:17.496 07:37:06 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58344 /var/tmp/spdk-nbd.sock 00:05:17.496 07:37:06 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58344 ']' 00:05:17.496 07:37:06 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:17.496 07:37:06 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.496 07:37:06 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:17.496 07:37:06 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.496 07:37:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:17.496 07:37:07 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:17.496 07:37:07 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:17.496 07:37:07 event.app_repeat -- event/event.sh@39 -- # killprocess 58344 00:05:17.496 07:37:07 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58344 ']' 00:05:17.496 07:37:07 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58344 00:05:17.496 07:37:07 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:17.496 07:37:07 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:17.496 07:37:07 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58344 00:05:17.496 killing process with pid 58344 00:05:17.496 07:37:07 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:17.496 07:37:07 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:17.496 07:37:07 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58344' 00:05:17.496 07:37:07 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58344 00:05:17.496 07:37:07 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58344 00:05:17.754 spdk_app_start is called in Round 0. 00:05:17.754 Shutdown signal received, stop current app iteration 00:05:17.754 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 reinitialization... 00:05:17.754 spdk_app_start is called in Round 1. 00:05:17.754 Shutdown signal received, stop current app iteration 00:05:17.754 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 reinitialization... 00:05:17.754 spdk_app_start is called in Round 2. 00:05:17.754 Shutdown signal received, stop current app iteration 00:05:17.754 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 reinitialization... 00:05:17.754 spdk_app_start is called in Round 3. 00:05:17.754 Shutdown signal received, stop current app iteration 00:05:18.013 ************************************ 00:05:18.013 END TEST app_repeat 00:05:18.013 ************************************ 00:05:18.013 07:37:07 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:18.013 07:37:07 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:18.013 00:05:18.013 real 0m17.668s 00:05:18.013 user 0m38.815s 00:05:18.013 sys 0m2.012s 00:05:18.013 07:37:07 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.013 07:37:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:18.013 07:37:07 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:18.013 07:37:07 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:18.013 07:37:07 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:18.013 07:37:07 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.013 07:37:07 event -- common/autotest_common.sh@10 -- # set +x 00:05:18.013 ************************************ 00:05:18.013 START TEST cpu_locks 00:05:18.013 ************************************ 00:05:18.013 07:37:07 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:18.013 * Looking for test storage... 
00:05:18.013 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:18.013 07:37:07 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:18.013 07:37:07 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:18.013 07:37:07 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:18.013 07:37:07 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:18.013 07:37:07 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.013 07:37:07 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.013 07:37:07 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.013 07:37:07 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.013 07:37:07 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.013 07:37:07 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.014 07:37:07 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.014 07:37:07 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.014 07:37:07 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.014 07:37:07 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.014 07:37:07 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.014 07:37:07 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:18.014 07:37:07 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:18.014 07:37:07 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.014 07:37:07 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:18.014 07:37:07 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:18.014 07:37:07 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:18.014 07:37:07 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.014 07:37:07 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:18.014 07:37:07 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.014 07:37:07 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:18.014 07:37:07 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:18.014 07:37:07 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.014 07:37:07 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:18.014 07:37:07 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.014 07:37:07 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.014 07:37:07 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.014 07:37:07 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:18.014 07:37:07 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.014 07:37:07 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:18.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.014 --rc genhtml_branch_coverage=1 00:05:18.014 --rc genhtml_function_coverage=1 00:05:18.014 --rc genhtml_legend=1 00:05:18.014 --rc geninfo_all_blocks=1 00:05:18.014 --rc geninfo_unexecuted_blocks=1 00:05:18.014 00:05:18.014 ' 00:05:18.014 07:37:07 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:18.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.014 --rc genhtml_branch_coverage=1 00:05:18.014 --rc genhtml_function_coverage=1 
00:05:18.014 --rc genhtml_legend=1
00:05:18.014 --rc geninfo_all_blocks=1
00:05:18.014 --rc geninfo_unexecuted_blocks=1
00:05:18.014
00:05:18.014 '
00:05:18.014 07:37:07 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:05:18.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:18.014 --rc genhtml_branch_coverage=1
00:05:18.014 --rc genhtml_function_coverage=1
00:05:18.014 --rc genhtml_legend=1
00:05:18.014 --rc geninfo_all_blocks=1
00:05:18.014 --rc geninfo_unexecuted_blocks=1
00:05:18.014
00:05:18.014 '
00:05:18.014 07:37:07 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:05:18.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:18.014 --rc genhtml_branch_coverage=1
00:05:18.014 --rc genhtml_function_coverage=1
00:05:18.014 --rc genhtml_legend=1
00:05:18.014 --rc geninfo_all_blocks=1
00:05:18.014 --rc geninfo_unexecuted_blocks=1
00:05:18.014
00:05:18.014 '
00:05:18.014 07:37:07 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:05:18.014 07:37:07 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:05:18.014 07:37:07 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:05:18.014 07:37:07 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:05:18.014 07:37:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:18.014 07:37:07 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:18.014 07:37:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:18.014 ************************************
00:05:18.014 START TEST default_locks
00:05:18.014 ************************************
00:05:18.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
07:37:07 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks
00:05:18.014 07:37:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58769
00:05:18.014 07:37:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58769
00:05:18.014 07:37:07 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58769 ']'
00:05:18.014 07:37:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:05:18.014 07:37:07 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:18.014 07:37:07 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:18.014 07:37:07 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:18.014 07:37:07 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:18.014 07:37:07 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:18.273 [2024-11-29 07:37:07.992619] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization...
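The xtrace above is scripts/common.sh's lt()/cmp_versions helpers deciding that lcov 1.15 predates 2 before the LCOV option strings are exported. A minimal sketch of the same dotted-version check, with simplified field splitting and no pre-release handling (both assumptions, not the upstream helper):

    # Sketch: numeric dotted-version "less than"; the shorter side is padded with 0s.
    version_lt() {
        local -a v1 v2
        IFS=. read -ra v1 <<< "$1"
        IFS=. read -ra v2 <<< "$2"
        local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < len; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "1.15 < 2"   # same verdict as the trace's return 0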
00:05:18.273 [2024-11-29 07:37:07.992866] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58769 ]
00:05:18.273 [2024-11-29 07:37:08.147164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:18.532 [2024-11-29 07:37:08.225903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:19.098 07:37:08 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:19.098 07:37:08 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0
00:05:19.099 07:37:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58769
00:05:19.099 07:37:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58769
00:05:19.099 07:37:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:19.099 07:37:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58769
00:05:19.099 07:37:08 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58769 ']'
00:05:19.099 07:37:08 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58769
00:05:19.099 07:37:08 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname
00:05:19.099 07:37:08 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:19.099 07:37:08 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58769
00:05:19.099 killing process with pid 58769
07:37:09 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:19.099 07:37:09 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:19.099 07:37:09 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58769'
00:05:19.099 07:37:09 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58769
00:05:19.099 07:37:09 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58769
00:05:20.475 07:37:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58769
00:05:20.475 07:37:10 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0
00:05:20.475 07:37:10 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58769
00:05:20.475 07:37:10 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:05:20.475 07:37:10 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:20.475 07:37:10 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:05:20.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
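locks_exist, traced at event/cpu_locks.sh@22 above, is the core assertion of the suite: a target started with -m 0x1 must hold a file lock whose name contains spdk_cpu_lock, visible through lslocks on its pid. Condensed from the trace:

    # Succeeds only if the pid holds a lock file named like spdk_cpu_lock_NNN.
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }
    locks_exist 58769 && echo "core lock held"   # pid from the run above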
00:05:20.475 ERROR: process (pid: 58769) is no longer running
00:05:20.475 07:37:10 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:20.475 07:37:10 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58769
00:05:20.475 07:37:10 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58769 ']'
00:05:20.475 07:37:10 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:20.475 07:37:10 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:20.475 07:37:10 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:20.475 07:37:10 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:20.475 07:37:10 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:20.475 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58769) - No such process
00:05:20.475 07:37:10 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:20.475 07:37:10 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1
00:05:20.475 07:37:10 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1
00:05:20.475 07:37:10 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:20.475 07:37:10 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:05:20.475 07:37:10 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:20.475 07:37:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:05:20.475 07:37:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:20.475 07:37:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:05:20.475 07:37:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:20.475
00:05:20.475 real 0m2.243s
00:05:20.475 user 0m2.216s
00:05:20.475 sys 0m0.429s
00:05:20.475 07:37:10 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:20.475 07:37:10 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:20.475 ************************************
00:05:20.475 END TEST default_locks
00:05:20.475 ************************************
00:05:20.475 07:37:10 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:05:20.475 07:37:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:20.475 07:37:10 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:20.475 07:37:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:22.938 ************************************
00:05:22.938 START TEST default_locks_via_rpc
00:05:22.938 ************************************
00:05:22.938 07:37:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc
00:05:22.938 07:37:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58822
00:05:22.938 07:37:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58822
00:05:22.938 07:37:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58822 ']'
00:05:20.475 07:37:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:20.475 07:37:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:20.475 07:37:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:05:20.475 07:37:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:20.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:20.475 07:37:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:20.475 07:37:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:20.475 [2024-11-29 07:37:10.291376] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization...
00:05:20.475 [2024-11-29 07:37:10.291565] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58822 ]
00:05:20.734 [2024-11-29 07:37:10.442042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:20.734 [2024-11-29 07:37:10.517696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:21.301 07:37:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:21.301 07:37:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:05:21.301 07:37:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:05:21.301 07:37:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:21.301 07:37:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:21.301 07:37:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:21.301 07:37:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:05:21.301 07:37:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:21.301 07:37:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:05:21.301 07:37:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:21.301 07:37:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:05:21.301 07:37:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:21.301 07:37:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:21.301 07:37:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:21.301 07:37:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58822
00:05:21.301 07:37:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58822
00:05:21.301 07:37:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:21.559 07:37:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58822
00:05:21.559 07:37:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58822 ']'
00:05:21.559 07:37:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58822
00:05:21.559 07:37:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname
00:05:21.559 07:37:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:21.559 07:37:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58822
00:05:21.559 killing process with pid 58822
07:37:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:21.559 07:37:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:21.559 07:37:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58822'
00:05:21.559 07:37:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58822
00:05:21.559 07:37:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58822
00:05:22.938 ************************************
00:05:22.938 END TEST default_locks_via_rpc
00:05:22.938 ************************************
00:05:22.938
00:05:22.938 real 0m2.238s
00:05:22.938 user 0m2.234s
00:05:22.938 sys 0m0.390s
00:05:22.938 07:37:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:22.938 07:37:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:22.938 07:37:12 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:05:22.938 07:37:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:22.938 07:37:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:22.938 07:37:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:22.938 ************************************
00:05:22.938 START TEST non_locking_app_on_locked_coremask
00:05:22.938 ************************************
00:05:22.938 07:37:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask
00:05:22.938 07:37:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58885
00:05:22.938 07:37:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58885 /var/tmp/spdk.sock
00:05:22.938 07:37:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58885 ']'
00:05:22.938 07:37:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:22.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
07:37:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:22.938 07:37:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
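default_locks_via_rpc, which just finished above, toggles lock tracking on a live target rather than at launch. Outside the harness the same two RPCs can be sent with SPDK's rpc.py against the socket from the trace; a sketch assuming the standard spdk_repo checkout layout:

    # Drop the core lock files, then re-acquire them, over /var/tmp/spdk.sock.
    cd /home/vagrant/spdk_repo/spdk
    scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
    scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks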
00:05:22.938 07:37:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:05:22.938 07:37:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:22.938 07:37:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:22.938 [2024-11-29 07:37:12.601382] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization...
00:05:22.938 [2024-11-29 07:37:12.601647] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58885 ]
00:05:22.938 [2024-11-29 07:37:12.762934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:22.938 [2024-11-29 07:37:12.858857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:23.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
07:37:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:23.511 07:37:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:23.511 07:37:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58895
00:05:23.511 07:37:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58895 /var/tmp/spdk2.sock
00:05:23.511 07:37:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58895 ']'
00:05:23.511 07:37:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:23.511 07:37:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:23.511 07:37:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:23.511 07:37:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:23.511 07:37:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:23.511 07:37:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:05:23.770 [2024-11-29 07:37:13.518295] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization...
00:05:23.770 [2024-11-29 07:37:13.518408] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58895 ]
00:05:23.770 [2024-11-29 07:37:13.692508] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
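The second target above shares core 0 with pid 58885 but passes --disable-cpumask-locks, so it never attempts to claim the core. The two launches the trace performs, reduced to their flags:

    # non_locking_app_on_locked_coremask, condensed: only the first target
    # takes the core 0 lock; the second opts out and coexists on the core.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 \
        --disable-cpumask-locks -r /var/tmp/spdk2.sock &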
00:05:23.770 [2024-11-29 07:37:13.692555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:24.028 [2024-11-29 07:37:13.885342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:25.403 07:37:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:25.403 07:37:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:25.403 07:37:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58885
00:05:25.403 07:37:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58885
00:05:25.404 07:37:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:25.404 07:37:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58885
00:05:25.404 07:37:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58885 ']'
00:05:25.404 07:37:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58885
00:05:25.404 07:37:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:25.404 07:37:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:25.404 07:37:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58885
00:05:25.404 killing process with pid 58885
07:37:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:25.404 07:37:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:25.404 07:37:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58885'
00:05:25.404 07:37:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58885
00:05:25.404 07:37:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58885
00:05:28.011 07:37:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58895
00:05:28.011 07:37:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58895 ']'
00:05:28.011 07:37:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58895
00:05:28.011 07:37:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:28.011 07:37:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:28.011 07:37:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58895
00:05:28.011 killing process with pid 58895
07:37:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:28.011 07:37:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:28.011 07:37:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58895'
00:05:28.011 07:37:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58895
00:05:28.011 07:37:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58895
00:05:28.948 ************************************
00:05:28.948 END TEST non_locking_app_on_locked_coremask
00:05:28.948 ************************************
00:05:28.948
00:05:28.948 real 0m6.300s
00:05:28.948 user 0m6.531s
00:05:28.948 sys 0m0.824s
00:05:28.948 07:37:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:28.948 07:37:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:28.948 07:37:18 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:05:28.948 07:37:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:28.948 07:37:18 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:28.948 07:37:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:28.948 ************************************
00:05:28.948 START TEST locking_app_on_unlocked_coremask
00:05:28.948 ************************************
00:05:28.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
07:37:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
00:05:28.948 07:37:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58992
00:05:28.948 07:37:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58992 /var/tmp/spdk.sock
00:05:28.948 07:37:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58992 ']'
00:05:28.948 07:37:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:28.948 07:37:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:28.948 07:37:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:05:28.948 07:37:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:28.949 07:37:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:28.949 07:37:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:29.207 [2024-11-29 07:37:18.959413] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization...
00:05:29.207 [2024-11-29 07:37:18.959551] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58992 ]
00:05:29.207 [2024-11-29 07:37:19.117931] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:29.207 [2024-11-29 07:37:19.117965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:29.465 [2024-11-29 07:37:19.194130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:30.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
07:37:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:30.032 07:37:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:30.032 07:37:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59008
00:05:30.032 07:37:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59008 /var/tmp/spdk2.sock
00:05:30.032 07:37:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59008 ']'
00:05:30.032 07:37:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:30.032 07:37:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:30.032 07:37:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:30.032 07:37:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:30.032 07:37:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:30.032 07:37:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:05:30.291 [2024-11-29 07:37:19.862825] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization...
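This test inverts the previous one: the first target (pid 58992) started with --disable-cpumask-locks and so holds no lock, which is why the second, locking target (pid 59008) can claim core 0, as locks_exist 59008 confirms below. In outline:

    # locking_app_on_unlocked_coremask, condensed from the trace.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &   # holds no lock
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # claims core 0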
00:05:30.032 [2024-11-29 07:37:19.862940] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59008 ]
00:05:30.291 [2024-11-29 07:37:20.026119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:30.291 [2024-11-29 07:37:20.185187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:31.227 07:37:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:31.227 07:37:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:31.227 07:37:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59008
00:05:31.227 07:37:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59008
00:05:31.227 07:37:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:31.486 07:37:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58992
00:05:31.486 07:37:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58992 ']'
00:05:31.486 07:37:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58992
00:05:31.486 07:37:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:31.486 07:37:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:31.486 07:37:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58992
00:05:31.486 killing process with pid 58992
07:37:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:31.486 07:37:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:31.486 07:37:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58992'
00:05:31.486 07:37:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58992
00:05:31.486 07:37:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58992
00:05:34.015 07:37:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59008
00:05:34.015 07:37:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59008 ']'
00:05:34.015 07:37:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59008
00:05:34.015 07:37:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:34.015 07:37:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:34.015 07:37:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59008
00:05:34.015 killing process with pid 59008
07:37:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:34.015 07:37:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:34.015 07:37:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59008'
00:05:34.015 07:37:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59008
00:05:34.015 07:37:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59008
00:05:35.393 ************************************
00:05:35.393 END TEST locking_app_on_unlocked_coremask
00:05:35.393 ************************************
00:05:35.393
00:05:35.393 real 0m6.086s
00:05:35.393 user 0m6.355s
00:05:35.393 sys 0m0.768s
00:05:35.393 07:37:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:35.393 07:37:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:35.393 07:37:25 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:05:35.393 07:37:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:35.393 07:37:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:35.393 07:37:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:35.393 ************************************
00:05:35.393 START TEST locking_app_on_locked_coremask
00:05:35.393 ************************************
00:05:35.393 07:37:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask
00:05:35.393 07:37:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59099
00:05:35.393 07:37:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59099 /var/tmp/spdk.sock
00:05:35.393 07:37:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59099 ']'
00:05:35.393 07:37:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:35.393 07:37:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:35.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
07:37:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
07:37:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
07:37:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:05:35.393 07:37:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:35.393 [2024-11-29 07:37:25.108695] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization...
00:05:35.393 [2024-11-29 07:37:25.108817] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59099 ]
00:05:35.393 [2024-11-29 07:37:25.265185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:35.712 [2024-11-29 07:37:25.341571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:36.278 07:37:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:36.278 07:37:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:36.278 07:37:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:05:36.278 07:37:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59115
00:05:36.278 07:37:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59115 /var/tmp/spdk2.sock
00:05:36.278 07:37:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0
00:05:36.278 07:37:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59115 /var/tmp/spdk2.sock
00:05:36.278 07:37:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:05:36.278 07:37:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:36.278 07:37:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:05:36.278 07:37:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:36.278 07:37:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59115 /var/tmp/spdk2.sock
00:05:36.278 07:37:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59115 ']'
00:05:36.278 07:37:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:36.278 07:37:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:36.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
07:37:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:36.278 07:37:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:36.278 07:37:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:36.278 [2024-11-29 07:37:25.991892] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization...
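With pid 59099 holding the core 0 lock, the second locking target launched above must fail, and the NOT wrapper asserts that by inverting its exit status. The doomed launch, reduced to shell:

    # locking_app_on_locked_coremask, condensed: the second launch exits
    # nonzero ("Unable to acquire lock on assigned core mask - exiting.").
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
    echo $?   # nonzero, which is what NOT waitforlisten expects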
00:05:36.278 [2024-11-29 07:37:25.992007] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59115 ]
00:05:36.278 [2024-11-29 07:37:26.152533] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59099 has claimed it.
00:05:36.278 [2024-11-29 07:37:26.152580] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:05:36.844 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59115) - No such process
00:05:36.844 ERROR: process (pid: 59115) is no longer running
00:05:36.844 07:37:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:36.844 07:37:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1
00:05:36.844 07:37:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1
00:05:36.844 07:37:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:36.844 07:37:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:05:36.844 07:37:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:36.844 07:37:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59099
00:05:36.844 07:37:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59099
00:05:36.844 07:37:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:37.103 07:37:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59099
00:05:37.103 07:37:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59099 ']'
00:05:37.103 07:37:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59099
00:05:37.103 07:37:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:37.103 07:37:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:37.103 07:37:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59099
00:05:37.103 07:37:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:37.103 killing process with pid 59099
07:37:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:37.103 07:37:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59099'
00:05:37.103 07:37:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59099
00:05:37.103 07:37:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59099
00:05:38.479
00:05:38.479 real 0m2.992s
00:05:38.479 user 0m3.222s
00:05:38.479 sys 0m0.500s
00:05:38.479 ************************************
00:05:38.479 END TEST locking_app_on_locked_coremask
00:05:38.479 ************************************
00:05:38.479 07:37:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:38.479 07:37:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:38.479 07:37:28 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:05:38.479 07:37:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:38.479 07:37:28 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:38.479 07:37:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:38.479 ************************************
00:05:38.479 START TEST locking_overlapped_coremask
00:05:38.479 ************************************
00:05:38.479 07:37:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask
00:05:38.479 07:37:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59168
00:05:38.479 07:37:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59168 /var/tmp/spdk.sock
00:05:38.479 07:37:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59168 ']'
00:05:38.479 07:37:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:38.479 07:37:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:38.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
07:37:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:38.479 07:37:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:38.479 07:37:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7
00:05:38.479 07:37:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:38.479 [2024-11-29 07:37:28.149944] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization...
00:05:38.479 [2024-11-29 07:37:28.150064] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59168 ]
00:05:38.479 [2024-11-29 07:37:28.307056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:38.479 [2024-11-29 07:37:28.385639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:38.479 [2024-11-29 07:37:28.386074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:38.479 [2024-11-29 07:37:28.386102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:39.046 07:37:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:39.046 07:37:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:39.046 07:37:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59186
00:05:39.046 07:37:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59186 /var/tmp/spdk2.sock
00:05:39.046 07:37:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0
00:05:39.046 07:37:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59186 /var/tmp/spdk2.sock
00:05:39.046 07:37:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:05:39.046 07:37:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:05:39.046 07:37:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:39.046 07:37:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:05:39.046 07:37:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:39.046 07:37:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59186 /var/tmp/spdk2.sock
00:05:39.046 07:37:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59186 ']'
00:05:39.046 07:37:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:39.046 07:37:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:39.046 07:37:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:39.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
07:37:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:39.046 07:37:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:39.304 [2024-11-29 07:37:29.049515] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization...
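The two masks traced above overlap: 0x7 covers cores 0-2, 0x1c covers cores 2-4, so both claim core 2, the core named in the error that follows. The intersection is plain bit arithmetic:

    # Why the failure below names core 2: the masks share exactly bit 2.
    printf '0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. bit 2 -> core 2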
00:05:39.304 [2024-11-29 07:37:29.049913] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59186 ]
00:05:39.304 [2024-11-29 07:37:29.222484] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59168 has claimed it.
00:05:39.304 [2024-11-29 07:37:29.222540] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:05:39.870 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59186) - No such process
00:05:39.870 ERROR: process (pid: 59186) is no longer running
00:05:39.870 07:37:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:39.870 07:37:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1
00:05:39.870 07:37:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1
00:05:39.870 07:37:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:39.870 07:37:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:05:39.870 07:37:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:39.870 07:37:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:05:39.870 07:37:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:05:39.870 07:37:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:05:39.870 07:37:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:05:39.870 07:37:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59168
00:05:39.870 07:37:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59168 ']'
00:05:39.870 07:37:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59168
00:05:39.870 07:37:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname
00:05:39.870 07:37:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:39.870 07:37:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59168
00:05:39.870 07:37:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:39.870 killing process with pid 59168
07:37:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:39.871 07:37:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59168'
00:05:39.871 07:37:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59168
00:05:39.871 07:37:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59168
00:05:41.244
00:05:41.244 real 0m2.819s
00:05:41.244 user 0m7.716s
00:05:41.244 sys 0m0.396s
00:05:41.244 ************************************
00:05:41.244 END TEST locking_overlapped_coremask
00:05:41.244 ************************************
00:05:41.245 07:37:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:41.245 07:37:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:41.245 07:37:30 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:05:41.245 07:37:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:41.245 07:37:30 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:41.245 07:37:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:41.245 ************************************
00:05:41.245 START TEST locking_overlapped_coremask_via_rpc
00:05:41.245 ************************************
00:05:41.245 07:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc
00:05:41.245 07:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59238
00:05:41.245 07:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59238 /var/tmp/spdk.sock
00:05:41.245 07:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59238 ']'
00:05:41.245 07:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:41.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
07:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:41.245 07:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:41.245 07:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:41.245 07:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:41.245 07:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:05:41.245 [2024-11-29 07:37:31.022315] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization...
00:05:41.245 [2024-11-29 07:37:31.022432] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59238 ]
00:05:41.245 [2024-11-29 07:37:31.176870] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
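check_remaining_locks, traced at event/cpu_locks.sh@36-38 above, asserts that the 0x7 target created exactly the lock files for cores 0-2. The same glob comparison, condensed (paths straight from the trace):

    # Expect exactly spdk_cpu_lock_000..002 under /var/tmp.
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ ${locks[*]} == "${locks_expected[*]}" ]] && echo "locks match"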
00:05:41.245 [2024-11-29 07:37:31.176905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:41.503 [2024-11-29 07:37:31.255533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:41.503 [2024-11-29 07:37:31.255713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:41.503 [2024-11-29 07:37:31.255790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:42.069 07:37:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:42.069 07:37:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:05:42.069 07:37:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59252
00:05:42.069 07:37:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:05:42.069 07:37:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59252 /var/tmp/spdk2.sock
00:05:42.069 07:37:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59252 ']'
00:05:42.069 07:37:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:42.069 07:37:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:42.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
07:37:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:42.069 07:37:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:42.069 07:37:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:42.328 [2024-11-29 07:37:31.918072] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization...
00:05:42.328 [2024-11-29 07:37:31.918185] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59252 ]
00:05:42.328 [2024-11-29 07:37:32.080804] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:42.328 [2024-11-29 07:37:32.080842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:42.328 [2024-11-29 07:37:32.240202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:42.328 [2024-11-29 07:37:32.240233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:42.328 [2024-11-29 07:37:32.240256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:43.262 07:37:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:43.262 07:37:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:43.262 07:37:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:43.262 07:37:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:43.262 07:37:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.262 07:37:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:43.262 07:37:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:43.262 07:37:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:43.262 07:37:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:43.262 07:37:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:43.262 07:37:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:43.262 07:37:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:43.262 07:37:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:43.262 07:37:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:43.262 07:37:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:43.262 07:37:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.262 [2024-11-29 07:37:33.171549] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59238 has claimed it. 00:05:43.262 request: 00:05:43.262 { 00:05:43.262 "method": "framework_enable_cpumask_locks", 00:05:43.262 "req_id": 1 00:05:43.262 } 00:05:43.262 Got JSON-RPC error response 00:05:43.262 response: 00:05:43.262 { 00:05:43.262 "code": -32603, 00:05:43.262 "message": "Failed to claim CPU core: 2" 00:05:43.262 } 00:05:43.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
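That exchange is the core of the test: the first target (pid 59238) claims the locks for cores 0-2 via framework_enable_cpumask_locks, so when the second target (pid 59252) asks for locks on cores 2-4, the claim on the shared core 2 fails with -32603, exactly as logged above. Reproduced by hand against the two sockets:

    # Succeeds: claims /var/tmp/spdk_cpu_lock_000..002 for the first target
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks
    # Fails with -32603 "Failed to claim CPU core: 2": core 2 is already held
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks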
00:05:43.262 07:37:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:43.262 07:37:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:43.262 07:37:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:43.262 07:37:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:43.262 07:37:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:43.262 07:37:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59238 /var/tmp/spdk.sock 00:05:43.262 07:37:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59238 ']' 00:05:43.262 07:37:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.262 07:37:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.262 07:37:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.262 07:37:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.262 07:37:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:43.521 07:37:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:43.521 07:37:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:43.521 07:37:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59252 /var/tmp/spdk2.sock 00:05:43.521 07:37:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59252 ']' 00:05:43.521 07:37:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:43.521 07:37:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.521 07:37:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
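The [[ 1 == 0 ]] / es=1 bookkeeping above is the harness's NOT wrapper verifying that the RPC failed as expected (the es > 128 check guards against the command dying from a signal, per the usual 128+N shell convention, rather than failing cleanly). A simplified sketch of the idiom, not the exact autotest_common.sh implementation:

    # NOT <cmd...>: succeed only when the wrapped command fails.
    NOT() {
        if "$@"; then
            return 1   # unexpected success
        fi
        return 0       # expected failure
    }
    NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks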
00:05:43.521 07:37:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.521 07:37:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.779 ************************************ 00:05:43.779 END TEST locking_overlapped_coremask_via_rpc 00:05:43.779 ************************************ 00:05:43.779 07:37:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:43.779 07:37:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:43.779 07:37:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:43.779 07:37:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:43.779 07:37:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:43.779 07:37:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:43.779 00:05:43.779 real 0m2.649s 00:05:43.779 user 0m1.051s 00:05:43.779 sys 0m0.127s 00:05:43.779 07:37:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.779 07:37:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.779 07:37:33 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:43.779 07:37:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59238 ]] 00:05:43.779 07:37:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59238 00:05:43.779 07:37:33 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59238 ']' 00:05:43.779 07:37:33 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59238 00:05:43.779 07:37:33 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:43.779 07:37:33 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:43.779 07:37:33 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59238 00:05:43.779 killing process with pid 59238 00:05:43.779 07:37:33 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:43.780 07:37:33 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:43.780 07:37:33 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59238' 00:05:43.780 07:37:33 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59238 00:05:43.780 07:37:33 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59238 00:05:45.155 07:37:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59252 ]] 00:05:45.155 07:37:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59252 00:05:45.155 07:37:34 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59252 ']' 00:05:45.155 07:37:34 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59252 00:05:45.155 07:37:34 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:45.155 07:37:34 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:45.155 
07:37:34 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59252 00:05:45.155 killing process with pid 59252 00:05:45.155 07:37:34 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:45.155 07:37:34 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:45.155 07:37:34 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59252' 00:05:45.155 07:37:34 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59252 00:05:45.155 07:37:34 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59252 00:05:46.532 07:37:36 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:46.532 07:37:36 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:46.532 07:37:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59238 ]] 00:05:46.532 07:37:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59238 00:05:46.532 07:37:36 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59238 ']' 00:05:46.532 07:37:36 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59238 00:05:46.532 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59238) - No such process 00:05:46.532 Process with pid 59238 is not found 00:05:46.532 07:37:36 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59238 is not found' 00:05:46.532 07:37:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59252 ]] 00:05:46.532 07:37:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59252 00:05:46.532 Process with pid 59252 is not found 00:05:46.532 07:37:36 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59252 ']' 00:05:46.532 07:37:36 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59252 00:05:46.532 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59252) - No such process 00:05:46.532 07:37:36 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59252 is not found' 00:05:46.532 07:37:36 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:46.532 00:05:46.532 real 0m28.283s 00:05:46.532 user 0m48.366s 00:05:46.532 sys 0m4.187s 00:05:46.532 07:37:36 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.532 ************************************ 00:05:46.532 END TEST cpu_locks 00:05:46.532 ************************************ 00:05:46.532 07:37:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.532 ************************************ 00:05:46.532 END TEST event 00:05:46.532 ************************************ 00:05:46.532 00:05:46.532 real 0m54.044s 00:05:46.532 user 1m39.545s 00:05:46.532 sys 0m6.921s 00:05:46.532 07:37:36 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.532 07:37:36 event -- common/autotest_common.sh@10 -- # set +x 00:05:46.532 07:37:36 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:46.532 07:37:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.532 07:37:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.532 07:37:36 -- common/autotest_common.sh@10 -- # set +x 00:05:46.532 ************************************ 00:05:46.532 START TEST thread 00:05:46.532 ************************************ 00:05:46.532 07:37:36 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:46.532 * Looking for test storage... 
00:05:46.532 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:46.532 07:37:36 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:46.532 07:37:36 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:05:46.532 07:37:36 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:46.532 07:37:36 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:46.532 07:37:36 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.532 07:37:36 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.532 07:37:36 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.532 07:37:36 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.532 07:37:36 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.532 07:37:36 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.532 07:37:36 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.532 07:37:36 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.532 07:37:36 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.532 07:37:36 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.532 07:37:36 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.532 07:37:36 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:46.532 07:37:36 thread -- scripts/common.sh@345 -- # : 1 00:05:46.532 07:37:36 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.532 07:37:36 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:46.532 07:37:36 thread -- scripts/common.sh@365 -- # decimal 1 00:05:46.532 07:37:36 thread -- scripts/common.sh@353 -- # local d=1 00:05:46.532 07:37:36 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.532 07:37:36 thread -- scripts/common.sh@355 -- # echo 1 00:05:46.532 07:37:36 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.532 07:37:36 thread -- scripts/common.sh@366 -- # decimal 2 00:05:46.532 07:37:36 thread -- scripts/common.sh@353 -- # local d=2 00:05:46.532 07:37:36 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.532 07:37:36 thread -- scripts/common.sh@355 -- # echo 2 00:05:46.532 07:37:36 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.532 07:37:36 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.533 07:37:36 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.533 07:37:36 thread -- scripts/common.sh@368 -- # return 0 00:05:46.533 07:37:36 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.533 07:37:36 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:46.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.533 --rc genhtml_branch_coverage=1 00:05:46.533 --rc genhtml_function_coverage=1 00:05:46.533 --rc genhtml_legend=1 00:05:46.533 --rc geninfo_all_blocks=1 00:05:46.533 --rc geninfo_unexecuted_blocks=1 00:05:46.533 00:05:46.533 ' 00:05:46.533 07:37:36 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:46.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.533 --rc genhtml_branch_coverage=1 00:05:46.533 --rc genhtml_function_coverage=1 00:05:46.533 --rc genhtml_legend=1 00:05:46.533 --rc geninfo_all_blocks=1 00:05:46.533 --rc geninfo_unexecuted_blocks=1 00:05:46.533 00:05:46.533 ' 00:05:46.533 07:37:36 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:46.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:46.533 --rc genhtml_branch_coverage=1 00:05:46.533 --rc genhtml_function_coverage=1 00:05:46.533 --rc genhtml_legend=1 00:05:46.533 --rc geninfo_all_blocks=1 00:05:46.533 --rc geninfo_unexecuted_blocks=1 00:05:46.533 00:05:46.533 ' 00:05:46.533 07:37:36 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:46.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.533 --rc genhtml_branch_coverage=1 00:05:46.533 --rc genhtml_function_coverage=1 00:05:46.533 --rc genhtml_legend=1 00:05:46.533 --rc geninfo_all_blocks=1 00:05:46.533 --rc geninfo_unexecuted_blocks=1 00:05:46.533 00:05:46.533 ' 00:05:46.533 07:37:36 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:46.533 07:37:36 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:46.533 07:37:36 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.533 07:37:36 thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.533 ************************************ 00:05:46.533 START TEST thread_poller_perf 00:05:46.533 ************************************ 00:05:46.533 07:37:36 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:46.533 [2024-11-29 07:37:36.313536] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:05:46.533 [2024-11-29 07:37:36.313944] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59406 ] 00:05:46.533 [2024-11-29 07:37:36.467931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.794 Running 1000 pollers for 1 seconds with 1 microseconds period. 
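poller_perf registers a batch of pollers on one SPDK thread and counts how many times they run in the allotted time: judging from the banner above, -b is the poller count, -l the poller period in microseconds, and -t the run time in seconds. The run can be repeated standalone:

    # 1000 timed pollers with a 1 us period, for 1 second.
    /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
    # The second run below uses -l 0 (busy-loop pollers with no timer),
    # which is why its per-poll cost comes out far lower.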
00:05:46.794 [2024-11-29 07:37:36.563023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.182 [2024-11-29T07:37:38.126Z] ====================================== 00:05:48.182 [2024-11-29T07:37:38.126Z] busy:2614684390 (cyc) 00:05:48.182 [2024-11-29T07:37:38.126Z] total_run_count: 306000 00:05:48.182 [2024-11-29T07:37:38.126Z] tsc_hz: 2600000000 (cyc) 00:05:48.182 [2024-11-29T07:37:38.126Z] ====================================== 00:05:48.182 [2024-11-29T07:37:38.126Z] poller_cost: 8544 (cyc), 3286 (nsec) 00:05:48.182 00:05:48.182 real 0m1.436s 00:05:48.182 user 0m1.267s 00:05:48.182 sys 0m0.061s 00:05:48.182 07:37:37 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.182 07:37:37 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:48.182 ************************************ 00:05:48.182 END TEST thread_poller_perf 00:05:48.182 ************************************ 00:05:48.182 07:37:37 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:48.182 07:37:37 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:48.182 07:37:37 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.182 07:37:37 thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.182 ************************************ 00:05:48.182 START TEST thread_poller_perf 00:05:48.182 ************************************ 00:05:48.182 07:37:37 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:48.182 [2024-11-29 07:37:37.807474] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:05:48.182 [2024-11-29 07:37:37.807579] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59437 ] 00:05:48.182 [2024-11-29 07:37:37.967806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.182 Running 1000 pollers for 1 seconds with 0 microseconds period. 
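The summary block above reduces to one division: poller_cost is the busy TSC cycle count divided by the number of poller executions, converted to nanoseconds via the reported TSC frequency. Checking the first run's numbers:

    poller_cost (cyc)  = busy / total_run_count
                       = 2614684390 / 306000          ≈ 8544 cyc
    poller_cost (nsec) = poller_cost / (tsc_hz / 1e9)
                       = 8544 / 2.6                   ≈ 3286 nsec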
00:05:48.182 [2024-11-29 07:37:38.062548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.568 [2024-11-29T07:37:39.512Z] ====================================== 00:05:49.568 [2024-11-29T07:37:39.512Z] busy:2603112724 (cyc) 00:05:49.568 [2024-11-29T07:37:39.512Z] total_run_count: 4009000 00:05:49.568 [2024-11-29T07:37:39.512Z] tsc_hz: 2600000000 (cyc) 00:05:49.568 [2024-11-29T07:37:39.512Z] ====================================== 00:05:49.568 [2024-11-29T07:37:39.512Z] poller_cost: 649 (cyc), 249 (nsec) 00:05:49.568 00:05:49.568 real 0m1.402s 00:05:49.568 user 0m1.238s 00:05:49.568 sys 0m0.057s 00:05:49.568 07:37:39 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.568 07:37:39 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:49.568 ************************************ 00:05:49.568 END TEST thread_poller_perf 00:05:49.568 ************************************ 00:05:49.568 07:37:39 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:49.568 00:05:49.568 real 0m3.077s 00:05:49.568 user 0m2.610s 00:05:49.568 sys 0m0.237s 00:05:49.568 07:37:39 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.568 ************************************ 00:05:49.568 END TEST thread 00:05:49.568 ************************************ 00:05:49.568 07:37:39 thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.568 07:37:39 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:49.568 07:37:39 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:49.568 07:37:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.568 07:37:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.569 07:37:39 -- common/autotest_common.sh@10 -- # set +x 00:05:49.569 ************************************ 00:05:49.569 START TEST app_cmdline 00:05:49.569 ************************************ 00:05:49.569 07:37:39 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:49.569 * Looking for test storage... 
00:05:49.569 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:49.569 07:37:39 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:49.569 07:37:39 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:05:49.569 07:37:39 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:49.569 07:37:39 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:49.569 07:37:39 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:49.569 07:37:39 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:49.569 07:37:39 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:49.569 07:37:39 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.569 07:37:39 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:49.569 07:37:39 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:49.569 07:37:39 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:49.569 07:37:39 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:49.569 07:37:39 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:49.569 07:37:39 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:49.569 07:37:39 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:49.569 07:37:39 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:49.569 07:37:39 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:49.569 07:37:39 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:49.569 07:37:39 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:49.569 07:37:39 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:49.569 07:37:39 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:49.569 07:37:39 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.569 07:37:39 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:49.569 07:37:39 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:49.569 07:37:39 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:49.569 07:37:39 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:49.569 07:37:39 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.569 07:37:39 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:49.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
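The scripts/common.sh chatter above (cmp_versions, decimal, lt 1.15 2) is the suite checking the installed lcov version so it can export compatible --rc coverage options; essentially the same block opens every test in this log. The comparison itself is an ordinary dotted-version compare, sketched here rather than copied from scripts/common.sh:

    # lt A B: true when version A sorts strictly before version B.
    lt() {
        [ "$1" != "$2" ] &&
        [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
    }
    lt 1.15 2 && echo "older lcov: use the --rc lcov_* option spelling"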
00:05:49.569 07:37:39 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:49.569 07:37:39 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:49.569 07:37:39 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:49.569 07:37:39 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:49.569 07:37:39 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.569 07:37:39 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:49.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.569 --rc genhtml_branch_coverage=1 00:05:49.569 --rc genhtml_function_coverage=1 00:05:49.569 --rc genhtml_legend=1 00:05:49.569 --rc geninfo_all_blocks=1 00:05:49.569 --rc geninfo_unexecuted_blocks=1 00:05:49.569 00:05:49.569 ' 00:05:49.569 07:37:39 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:49.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.569 --rc genhtml_branch_coverage=1 00:05:49.569 --rc genhtml_function_coverage=1 00:05:49.569 --rc genhtml_legend=1 00:05:49.569 --rc geninfo_all_blocks=1 00:05:49.569 --rc geninfo_unexecuted_blocks=1 00:05:49.569 00:05:49.569 ' 00:05:49.569 07:37:39 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:49.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.569 --rc genhtml_branch_coverage=1 00:05:49.569 --rc genhtml_function_coverage=1 00:05:49.569 --rc genhtml_legend=1 00:05:49.569 --rc geninfo_all_blocks=1 00:05:49.569 --rc geninfo_unexecuted_blocks=1 00:05:49.569 00:05:49.569 ' 00:05:49.569 07:37:39 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:49.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.569 --rc genhtml_branch_coverage=1 00:05:49.569 --rc genhtml_function_coverage=1 00:05:49.569 --rc genhtml_legend=1 00:05:49.569 --rc geninfo_all_blocks=1 00:05:49.569 --rc geninfo_unexecuted_blocks=1 00:05:49.569 00:05:49.569 ' 00:05:49.569 07:37:39 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:49.569 07:37:39 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59526 00:05:49.569 07:37:39 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59526 00:05:49.569 07:37:39 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59526 ']' 00:05:49.569 07:37:39 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.569 07:37:39 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.569 07:37:39 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.569 07:37:39 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.569 07:37:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:49.569 07:37:39 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:49.569 [2024-11-29 07:37:39.445818] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
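Note the --rpcs-allowed spdk_get_version,rpc_get_methods on the target's command line: the cmdline test deliberately restricts the RPC surface to those two methods, so the env_dpdk_get_mem_stats call attempted below has to come back as -32601 Method not found. Against a target started this way:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version        # allowed
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods         # allowed
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats  # -32601 Method not found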
00:05:49.569 [2024-11-29 07:37:39.445911] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59526 ] 00:05:49.831 [2024-11-29 07:37:39.599480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.831 [2024-11-29 07:37:39.696871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.402 07:37:40 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.402 07:37:40 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:50.402 07:37:40 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:50.660 { 00:05:50.660 "version": "SPDK v25.01-pre git sha1 35cd3e84d", 00:05:50.660 "fields": { 00:05:50.660 "major": 25, 00:05:50.660 "minor": 1, 00:05:50.660 "patch": 0, 00:05:50.660 "suffix": "-pre", 00:05:50.660 "commit": "35cd3e84d" 00:05:50.660 } 00:05:50.660 } 00:05:50.660 07:37:40 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:50.661 07:37:40 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:50.661 07:37:40 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:50.661 07:37:40 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:50.661 07:37:40 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:50.661 07:37:40 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:50.661 07:37:40 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.661 07:37:40 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:50.661 07:37:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:50.661 07:37:40 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.661 07:37:40 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:50.661 07:37:40 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:50.661 07:37:40 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:50.661 07:37:40 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:50.661 07:37:40 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:50.661 07:37:40 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:50.661 07:37:40 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:50.661 07:37:40 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:50.661 07:37:40 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:50.661 07:37:40 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:50.661 07:37:40 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:50.661 07:37:40 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:50.661 07:37:40 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:50.661 07:37:40 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:50.921 request: 00:05:50.921 { 00:05:50.921 "method": "env_dpdk_get_mem_stats", 00:05:50.921 "req_id": 1 00:05:50.921 } 00:05:50.921 Got JSON-RPC error response 00:05:50.921 response: 00:05:50.921 { 00:05:50.921 "code": -32601, 00:05:50.921 "message": "Method not found" 00:05:50.921 } 00:05:50.921 07:37:40 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:50.921 07:37:40 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:50.921 07:37:40 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:50.921 07:37:40 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:50.921 07:37:40 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59526 00:05:50.921 07:37:40 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59526 ']' 00:05:50.921 07:37:40 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59526 00:05:50.921 07:37:40 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:50.921 07:37:40 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:50.921 07:37:40 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59526 00:05:50.921 killing process with pid 59526 00:05:50.921 07:37:40 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:50.921 07:37:40 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:50.921 07:37:40 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59526' 00:05:50.921 07:37:40 app_cmdline -- common/autotest_common.sh@973 -- # kill 59526 00:05:50.921 07:37:40 app_cmdline -- common/autotest_common.sh@978 -- # wait 59526 00:05:52.310 00:05:52.310 real 0m2.821s 00:05:52.310 user 0m3.064s 00:05:52.310 sys 0m0.400s 00:05:52.310 07:37:42 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.310 ************************************ 00:05:52.310 END TEST app_cmdline 00:05:52.310 ************************************ 00:05:52.310 07:37:42 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:52.310 07:37:42 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:52.310 07:37:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.310 07:37:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.310 07:37:42 -- common/autotest_common.sh@10 -- # set +x 00:05:52.310 ************************************ 00:05:52.310 START TEST version 00:05:52.310 ************************************ 00:05:52.310 07:37:42 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:52.310 * Looking for test storage... 
00:05:52.310 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:52.310 07:37:42 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:52.311 07:37:42 version -- common/autotest_common.sh@1693 -- # lcov --version 00:05:52.311 07:37:42 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:52.311 07:37:42 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:52.311 07:37:42 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:52.311 07:37:42 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:52.311 07:37:42 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:52.311 07:37:42 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:52.311 07:37:42 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:52.311 07:37:42 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:52.311 07:37:42 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:52.311 07:37:42 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:52.311 07:37:42 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:52.311 07:37:42 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:52.311 07:37:42 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:52.311 07:37:42 version -- scripts/common.sh@344 -- # case "$op" in 00:05:52.311 07:37:42 version -- scripts/common.sh@345 -- # : 1 00:05:52.311 07:37:42 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:52.311 07:37:42 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:52.311 07:37:42 version -- scripts/common.sh@365 -- # decimal 1 00:05:52.311 07:37:42 version -- scripts/common.sh@353 -- # local d=1 00:05:52.311 07:37:42 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:52.311 07:37:42 version -- scripts/common.sh@355 -- # echo 1 00:05:52.311 07:37:42 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:52.311 07:37:42 version -- scripts/common.sh@366 -- # decimal 2 00:05:52.311 07:37:42 version -- scripts/common.sh@353 -- # local d=2 00:05:52.311 07:37:42 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:52.311 07:37:42 version -- scripts/common.sh@355 -- # echo 2 00:05:52.311 07:37:42 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:52.311 07:37:42 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:52.571 07:37:42 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:52.571 07:37:42 version -- scripts/common.sh@368 -- # return 0 00:05:52.571 07:37:42 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:52.571 07:37:42 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:52.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.571 --rc genhtml_branch_coverage=1 00:05:52.571 --rc genhtml_function_coverage=1 00:05:52.571 --rc genhtml_legend=1 00:05:52.571 --rc geninfo_all_blocks=1 00:05:52.571 --rc geninfo_unexecuted_blocks=1 00:05:52.571 00:05:52.571 ' 00:05:52.571 07:37:42 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:52.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.571 --rc genhtml_branch_coverage=1 00:05:52.571 --rc genhtml_function_coverage=1 00:05:52.571 --rc genhtml_legend=1 00:05:52.571 --rc geninfo_all_blocks=1 00:05:52.571 --rc geninfo_unexecuted_blocks=1 00:05:52.571 00:05:52.571 ' 00:05:52.571 07:37:42 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:52.571 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:52.571 --rc genhtml_branch_coverage=1 00:05:52.571 --rc genhtml_function_coverage=1 00:05:52.571 --rc genhtml_legend=1 00:05:52.571 --rc geninfo_all_blocks=1 00:05:52.571 --rc geninfo_unexecuted_blocks=1 00:05:52.571 00:05:52.571 ' 00:05:52.571 07:37:42 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:52.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.571 --rc genhtml_branch_coverage=1 00:05:52.571 --rc genhtml_function_coverage=1 00:05:52.571 --rc genhtml_legend=1 00:05:52.571 --rc geninfo_all_blocks=1 00:05:52.571 --rc geninfo_unexecuted_blocks=1 00:05:52.571 00:05:52.571 ' 00:05:52.571 07:37:42 version -- app/version.sh@17 -- # get_header_version major 00:05:52.571 07:37:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:52.571 07:37:42 version -- app/version.sh@14 -- # cut -f2 00:05:52.571 07:37:42 version -- app/version.sh@14 -- # tr -d '"' 00:05:52.571 07:37:42 version -- app/version.sh@17 -- # major=25 00:05:52.571 07:37:42 version -- app/version.sh@18 -- # get_header_version minor 00:05:52.571 07:37:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:52.571 07:37:42 version -- app/version.sh@14 -- # tr -d '"' 00:05:52.571 07:37:42 version -- app/version.sh@14 -- # cut -f2 00:05:52.571 07:37:42 version -- app/version.sh@18 -- # minor=1 00:05:52.571 07:37:42 version -- app/version.sh@19 -- # get_header_version patch 00:05:52.571 07:37:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:52.571 07:37:42 version -- app/version.sh@14 -- # cut -f2 00:05:52.571 07:37:42 version -- app/version.sh@14 -- # tr -d '"' 00:05:52.571 07:37:42 version -- app/version.sh@19 -- # patch=0 00:05:52.571 07:37:42 version -- app/version.sh@20 -- # get_header_version suffix 00:05:52.571 07:37:42 version -- app/version.sh@14 -- # tr -d '"' 00:05:52.571 07:37:42 version -- app/version.sh@14 -- # cut -f2 00:05:52.571 07:37:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:52.571 07:37:42 version -- app/version.sh@20 -- # suffix=-pre 00:05:52.571 07:37:42 version -- app/version.sh@22 -- # version=25.1 00:05:52.571 07:37:42 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:52.571 07:37:42 version -- app/version.sh@28 -- # version=25.1rc0 00:05:52.571 07:37:42 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:52.571 07:37:42 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:52.571 07:37:42 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:52.571 07:37:42 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:52.571 00:05:52.571 real 0m0.191s 00:05:52.571 user 0m0.127s 00:05:52.571 sys 0m0.086s 00:05:52.571 ************************************ 00:05:52.571 END TEST version 00:05:52.571 ************************************ 00:05:52.571 07:37:42 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.571 07:37:42 version -- common/autotest_common.sh@10 -- # set +x 00:05:52.571 07:37:42 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:52.571 07:37:42 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:52.571 07:37:42 -- spdk/autotest.sh@194 -- # uname -s 00:05:52.571 07:37:42 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:52.571 07:37:42 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:52.571 07:37:42 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:52.571 07:37:42 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:05:52.571 07:37:42 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:05:52.571 07:37:42 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:52.571 07:37:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.571 07:37:42 -- common/autotest_common.sh@10 -- # set +x 00:05:52.571 ************************************ 00:05:52.571 START TEST blockdev_nvme 00:05:52.571 ************************************ 00:05:52.571 07:37:42 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:05:52.571 * Looking for test storage... 00:05:52.571 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:05:52.571 07:37:42 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:52.571 07:37:42 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:52.571 07:37:42 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:05:52.571 07:37:42 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:52.571 07:37:42 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:52.571 07:37:42 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:52.571 07:37:42 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:52.571 07:37:42 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:05:52.571 07:37:42 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:05:52.571 07:37:42 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:05:52.571 07:37:42 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:05:52.571 07:37:42 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:05:52.571 07:37:42 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:05:52.571 07:37:42 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:05:52.571 07:37:42 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:52.571 07:37:42 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:05:52.571 07:37:42 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:05:52.571 07:37:42 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:52.571 07:37:42 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:52.571 07:37:42 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:05:52.571 07:37:42 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:05:52.571 07:37:42 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:52.571 07:37:42 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:05:52.571 07:37:42 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:05:52.571 07:37:42 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:05:52.571 07:37:42 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:05:52.571 07:37:42 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:52.571 07:37:42 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:05:52.571 07:37:42 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:05:52.571 07:37:42 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:52.571 07:37:42 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:52.571 07:37:42 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:05:52.571 07:37:42 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:52.571 07:37:42 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:52.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.571 --rc genhtml_branch_coverage=1 00:05:52.571 --rc genhtml_function_coverage=1 00:05:52.571 --rc genhtml_legend=1 00:05:52.571 --rc geninfo_all_blocks=1 00:05:52.571 --rc geninfo_unexecuted_blocks=1 00:05:52.571 00:05:52.571 ' 00:05:52.571 07:37:42 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:52.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.571 --rc genhtml_branch_coverage=1 00:05:52.571 --rc genhtml_function_coverage=1 00:05:52.571 --rc genhtml_legend=1 00:05:52.571 --rc geninfo_all_blocks=1 00:05:52.571 --rc geninfo_unexecuted_blocks=1 00:05:52.571 00:05:52.571 ' 00:05:52.571 07:37:42 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:52.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.571 --rc genhtml_branch_coverage=1 00:05:52.571 --rc genhtml_function_coverage=1 00:05:52.571 --rc genhtml_legend=1 00:05:52.571 --rc geninfo_all_blocks=1 00:05:52.571 --rc geninfo_unexecuted_blocks=1 00:05:52.572 00:05:52.572 ' 00:05:52.572 07:37:42 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:52.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.572 --rc genhtml_branch_coverage=1 00:05:52.572 --rc genhtml_function_coverage=1 00:05:52.572 --rc genhtml_legend=1 00:05:52.572 --rc geninfo_all_blocks=1 00:05:52.572 --rc geninfo_unexecuted_blocks=1 00:05:52.572 00:05:52.572 ' 00:05:52.572 07:37:42 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:52.572 07:37:42 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:05:52.572 07:37:42 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:05:52.572 07:37:42 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:52.572 07:37:42 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:05:52.572 07:37:42 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:05:52.572 07:37:42 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:05:52.572 07:37:42 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:05:52.572 07:37:42 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:05:52.572 07:37:42 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:05:52.572 07:37:42 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:05:52.572 07:37:42 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:05:52.572 07:37:42 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:05:52.572 07:37:42 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:05:52.572 07:37:42 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:05:52.572 07:37:42 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:05:52.572 07:37:42 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:05:52.572 07:37:42 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:05:52.572 07:37:42 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:05:52.572 07:37:42 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:05:52.572 07:37:42 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:05:52.572 07:37:42 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:05:52.572 07:37:42 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:05:52.572 07:37:42 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:05:52.572 07:37:42 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=59698 00:05:52.572 07:37:42 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:05:52.572 07:37:42 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 59698 00:05:52.572 07:37:42 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 59698 ']' 00:05:52.572 07:37:42 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:05:52.572 07:37:42 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.572 07:37:42 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.572 07:37:42 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.572 07:37:42 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.572 07:37:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:52.831 [2024-11-29 07:37:42.580709] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
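blockdev.sh is a generic bdev test driver and test_type=nvme selects the NVMe backend: setup_nvme_conf (just below) has gen_nvme.sh emit a bdev subsystem config attaching the four QEMU NVMe controllers at 0000:00:10.0 through 0000:00:13.0, which is then loaded into the freshly started target over JSON-RPC. A single controller can be attached the same way by hand, assuming the standard rpc.py flags:

    # Attach one PCIe NVMe controller as bdev "Nvme0"; the test's
    # load_subsystem_config call does this for all four devices at once.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller \
        -b Nvme0 -t PCIe -a 0000:00:10.0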
00:05:52.831 [2024-11-29 07:37:42.580833] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59698 ] 00:05:52.831 [2024-11-29 07:37:42.738871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.088 [2024-11-29 07:37:42.837609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.658 07:37:43 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.658 07:37:43 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:05:53.658 07:37:43 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:05:53.658 07:37:43 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:05:53.658 07:37:43 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:05:53.658 07:37:43 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:05:53.658 07:37:43 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:53.658 07:37:43 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:05:53.658 07:37:43 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.658 07:37:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:53.917 07:37:43 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.917 07:37:43 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:05:53.917 07:37:43 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.917 07:37:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:53.917 07:37:43 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.917 07:37:43 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:05:53.917 07:37:43 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:05:53.917 07:37:43 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.917 07:37:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:53.917 07:37:43 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.917 07:37:43 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:05:53.917 07:37:43 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.917 07:37:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:53.917 07:37:43 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.917 07:37:43 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:05:53.917 07:37:43 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.917 07:37:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:53.917 07:37:43 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.917 07:37:43 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:05:53.917 07:37:43 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:05:53.917 07:37:43 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:05:53.917 07:37:43 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.917 07:37:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:53.917 07:37:43 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.917 07:37:43 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:05:54.178 07:37:43 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:05:54.179 07:37:43 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "f0bc1b86-e530-4b30-9c9d-e6652b707b00"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "f0bc1b86-e530-4b30-9c9d-e6652b707b00",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "b199e883-7c10-48e9-afd8-248f500bcc1e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "b199e883-7c10-48e9-afd8-248f500bcc1e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "c125ff38-ef0e-4015-8dd7-a89a52eca4cd"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c125ff38-ef0e-4015-8dd7-a89a52eca4cd",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "24c541ce-4742-4718-84ae-3a822f49b53e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "24c541ce-4742-4718-84ae-3a822f49b53e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "d5a3f8ea-4a5c-4a7a-b5c3-5b8e25141c86"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "d5a3f8ea-4a5c-4a7a-b5c3-5b8e25141c86",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "838339d4-ac03-4675-adf4-67c469e67175"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "838339d4-ac03-4675-adf4-67c469e67175",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:05:54.179 07:37:43 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:05:54.179 07:37:43 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:05:54.179 07:37:43 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:05:54.179 07:37:43 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 59698 00:05:54.179 07:37:43 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 59698 ']' 00:05:54.179 07:37:43 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 59698 00:05:54.179 07:37:43 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:05:54.179 07:37:43 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:54.179 07:37:43 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59698 00:05:54.179 killing process with pid 59698 00:05:54.179 07:37:43 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:54.179 07:37:43 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:54.179 07:37:43 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59698' 00:05:54.179 07:37:43 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 59698 00:05:54.179 07:37:43 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 59698 00:05:55.576 07:37:45 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:05:55.576 07:37:45 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:05:55.576 07:37:45 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:05:55.576 07:37:45 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.576 07:37:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:55.576 ************************************ 00:05:55.576 START TEST bdev_hello_world 00:05:55.576 ************************************ 00:05:55.576 07:37:45 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:05:55.577 [2024-11-29 07:37:45.471801] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:05:55.577 [2024-11-29 07:37:45.472021] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59777 ] 00:05:55.837 [2024-11-29 07:37:45.631722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.837 [2024-11-29 07:37:45.726944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.407 [2024-11-29 07:37:46.273211] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:05:56.407 [2024-11-29 07:37:46.273252] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:05:56.407 [2024-11-29 07:37:46.273269] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:05:56.407 [2024-11-29 07:37:46.275709] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:05:56.407 [2024-11-29 07:37:46.276421] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:05:56.407 [2024-11-29 07:37:46.276543] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:05:56.407 [2024-11-29 07:37:46.277139] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
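The hello_bdev run above exercises the same stack that setup_nvme_conf built earlier: gen_nvme.sh emits a bdev subsystem JSON naming each PCIe controller, rpc_cmd loads it into the target, and bdev_get_bdevs enumerates the resulting namespaces. A minimal sketch of that attach-and-enumerate flow, assuming a spdk_tgt already listening on the default /var/tmp/spdk.sock:

    # Attach one controller by PCI address (the test loads four at once via
    # load_subsystem_config -j '<json>'); name and traddr match the log above.
    ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
    # List the unclaimed bdevs, mirroring the jq filter used by the test:
    ./scripts/rpc.py bdev_get_bdevs | jq -r '.[] | select(.claimed == false) | .name'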
00:05:56.407 00:05:56.407 [2024-11-29 07:37:46.277164] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:05:57.345 00:05:57.345 real 0m1.559s 00:05:57.345 user 0m1.284s 00:05:57.345 sys 0m0.167s 00:05:57.345 07:37:46 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.345 07:37:46 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:05:57.345 ************************************ 00:05:57.345 END TEST bdev_hello_world 00:05:57.345 ************************************ 00:05:57.345 07:37:47 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:05:57.345 07:37:47 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:57.345 07:37:47 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.345 07:37:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:57.345 ************************************ 00:05:57.345 START TEST bdev_bounds 00:05:57.345 ************************************ 00:05:57.345 07:37:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:05:57.345 Process bdevio pid: 59813 00:05:57.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.345 07:37:47 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=59813 00:05:57.345 07:37:47 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:05:57.345 07:37:47 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 59813' 00:05:57.345 07:37:47 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 59813 00:05:57.345 07:37:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 59813 ']' 00:05:57.345 07:37:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.345 07:37:47 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:05:57.345 07:37:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.345 07:37:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.345 07:37:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.345 07:37:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:05:57.345 [2024-11-29 07:37:47.078807] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:05:57.345 [2024-11-29 07:37:47.078928] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59813 ] 00:05:57.345 [2024-11-29 07:37:47.234820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:57.604 [2024-11-29 07:37:47.315241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.604 [2024-11-29 07:37:47.315528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.604 [2024-11-29 07:37:47.315547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:58.170 07:37:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.170 07:37:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:05:58.170 07:37:47 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:05:58.170 I/O targets: 00:05:58.170 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:05:58.170 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:05:58.170 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:58.170 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:58.170 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:58.170 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:05:58.170 00:05:58.170 00:05:58.170 CUnit - A unit testing framework for C - Version 2.1-3 00:05:58.170 http://cunit.sourceforge.net/ 00:05:58.170 00:05:58.170 00:05:58.170 Suite: bdevio tests on: Nvme3n1 00:05:58.170 Test: blockdev write read block ...passed 00:05:58.170 Test: blockdev write zeroes read block ...passed 00:05:58.170 Test: blockdev write zeroes read no split ...passed 00:05:58.170 Test: blockdev write zeroes read split ...passed 00:05:58.170 Test: blockdev write zeroes read split partial ...passed 00:05:58.170 Test: blockdev reset ...[2024-11-29 07:37:48.046810] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:05:58.170 [2024-11-29 07:37:48.049642] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
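The bdevio suites that follow are driven in two steps, both visible in the trace: the bdevio app is started in wait mode against the shared bdev.json, and tests.py then fires the perform_tests RPC. A rough sketch of the same invocation, assuming it is run from the SPDK repo root with the default RPC socket:

    # Start bdevio waiting for the perform_tests RPC (flags as in the log),
    # then trigger the suites from a second shell once the socket is up:
    test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
    test/bdev/bdevio/tests.py perform_tests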
00:05:58.170 passed 00:05:58.170 Test: blockdev write read 8 blocks ...passed 00:05:58.170 Test: blockdev write read size > 128k ...passed 00:05:58.170 Test: blockdev write read invalid size ...passed 00:05:58.170 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:58.170 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:58.170 Test: blockdev write read max offset ...passed 00:05:58.170 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:58.170 Test: blockdev writev readv 8 blocks ...passed 00:05:58.170 Test: blockdev writev readv 30 x 1block ...passed 00:05:58.170 Test: blockdev writev readv block ...passed 00:05:58.170 Test: blockdev writev readv size > 128k ...passed 00:05:58.170 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:58.170 Test: blockdev comparev and writev ...[2024-11-29 07:37:48.057078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2be20a000 len:0x1000 00:05:58.170 [2024-11-29 07:37:48.057218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:58.170 passed 00:05:58.170 Test: blockdev nvme passthru rw ...passed 00:05:58.170 Test: blockdev nvme passthru vendor specific ...[2024-11-29 07:37:48.057909] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:58.170 [2024-11-29 07:37:48.058011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0passed 00:05:58.170 Test: blockdev nvme admin passthru ... sqhd:001c p:1 m:0 dnr:1 00:05:58.170 passed 00:05:58.170 Test: blockdev copy ...passed 00:05:58.170 Suite: bdevio tests on: Nvme2n3 00:05:58.170 Test: blockdev write read block ...passed 00:05:58.170 Test: blockdev write zeroes read block ...passed 00:05:58.170 Test: blockdev write zeroes read no split ...passed 00:05:58.170 Test: blockdev write zeroes read split ...passed 00:05:58.170 Test: blockdev write zeroes read split partial ...passed 00:05:58.170 Test: blockdev reset ...[2024-11-29 07:37:48.109115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:05:58.170 [2024-11-29 07:37:48.112082] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
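Each suite's 'blockdev reset' drives a full controller disconnect/reconnect, as the paired nvme_ctrlr_disconnect and bdev_nvme_reset_ctrlr_complete notices show. Assuming the build exposes the RPC, the same reset could be requested by hand; note it takes the controller name given at attach time, not a bdev name:

    # Hypothetical manual equivalent of the reset the test performs:
    ./scripts/rpc.py bdev_nvme_reset_controller Nvme2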
00:05:58.170 passed 00:05:58.170 Test: blockdev write read 8 blocks ...passed 00:05:58.170 Test: blockdev write read size > 128k ...passed 00:05:58.170 Test: blockdev write read invalid size ...passed 00:05:58.429 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:58.429 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:58.429 Test: blockdev write read max offset ...passed 00:05:58.429 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:58.429 Test: blockdev writev readv 8 blocks ...passed 00:05:58.429 Test: blockdev writev readv 30 x 1block ...passed 00:05:58.429 Test: blockdev writev readv block ...passed 00:05:58.429 Test: blockdev writev readv size > 128k ...passed 00:05:58.429 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:58.429 Test: blockdev comparev and writev ...[2024-11-29 07:37:48.119027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2a0c06000 len:0x1000 00:05:58.429 [2024-11-29 07:37:48.119074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:58.429 passed 00:05:58.429 Test: blockdev nvme passthru rw ...passed 00:05:58.429 Test: blockdev nvme passthru vendor specific ...[2024-11-29 07:37:48.119597] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1passed 00:05:58.429 Test: blockdev nvme admin passthru ... cid:190 PRP1 0x0 PRP2 0x0 00:05:58.429 [2024-11-29 07:37:48.119703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:58.429 passed 00:05:58.429 Test: blockdev copy ...passed 00:05:58.429 Suite: bdevio tests on: Nvme2n2 00:05:58.429 Test: blockdev write read block ...passed 00:05:58.429 Test: blockdev write zeroes read block ...passed 00:05:58.429 Test: blockdev write zeroes read no split ...passed 00:05:58.429 Test: blockdev write zeroes read split ...passed 00:05:58.429 Test: blockdev write zeroes read split partial ...passed 00:05:58.429 Test: blockdev reset ...[2024-11-29 07:37:48.171337] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:05:58.429 passed 00:05:58.429 Test: blockdev write read 8 blocks ...[2024-11-29 07:37:48.173982] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
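The 'comparev and writev' cases rely on the compare capability each bdev advertises in supported_io_types (in the bdev_get_bdevs dump earlier, compare is true and compare_and_write is false on these namespaces), and the COMPARE FAILURE (02/85) completions are the expected outcome of comparing against deliberately mismatching data. One way to pull just those capability flags, assuming the same rpc.py and jq as above:

    ./scripts/rpc.py bdev_get_bdevs | jq -r \
      '.[] | "\(.name): compare=\(.supported_io_types.compare) compare_and_write=\(.supported_io_types.compare_and_write)"'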
00:05:58.429 passed 00:05:58.429 Test: blockdev write read size > 128k ...passed 00:05:58.429 Test: blockdev write read invalid size ...passed 00:05:58.429 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:58.429 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:58.429 Test: blockdev write read max offset ...passed 00:05:58.429 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:58.429 Test: blockdev writev readv 8 blocks ...passed 00:05:58.429 Test: blockdev writev readv 30 x 1block ...passed 00:05:58.429 Test: blockdev writev readv block ...passed 00:05:58.429 Test: blockdev writev readv size > 128k ...passed 00:05:58.429 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:58.430 Test: blockdev comparev and writev ...[2024-11-29 07:37:48.180058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d8e3c000 len:0x1000 00:05:58.430 [2024-11-29 07:37:48.180098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:58.430 passed 00:05:58.430 Test: blockdev nvme passthru rw ...passed 00:05:58.430 Test: blockdev nvme passthru vendor specific ...passed 00:05:58.430 Test: blockdev nvme admin passthru ...[2024-11-29 07:37:48.180777] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:58.430 [2024-11-29 07:37:48.180805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:58.430 passed 00:05:58.430 Test: blockdev copy ...passed 00:05:58.430 Suite: bdevio tests on: Nvme2n1 00:05:58.430 Test: blockdev write read block ...passed 00:05:58.430 Test: blockdev write zeroes read block ...passed 00:05:58.430 Test: blockdev write zeroes read no split ...passed 00:05:58.430 Test: blockdev write zeroes read split ...passed 00:05:58.430 Test: blockdev write zeroes read split partial ...passed 00:05:58.430 Test: blockdev reset ...[2024-11-29 07:37:48.219534] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:05:58.430 [2024-11-29 07:37:48.222206] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. passed 00:05:58.430 Test: blockdev write read 8 blocks ...
00:05:58.430 passed 00:05:58.430 Test: blockdev write read size > 128k ...passed 00:05:58.430 Test: blockdev write read invalid size ...passed 00:05:58.430 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:58.430 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:58.430 Test: blockdev write read max offset ...passed 00:05:58.430 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:58.430 Test: blockdev writev readv 8 blocks ...passed 00:05:58.430 Test: blockdev writev readv 30 x 1block ...passed 00:05:58.430 Test: blockdev writev readv block ...passed 00:05:58.430 Test: blockdev writev readv size > 128k ...passed 00:05:58.430 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:58.430 Test: blockdev comparev and writev ...[2024-11-29 07:37:48.229536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d8e38000 len:0x1000 00:05:58.430 [2024-11-29 07:37:48.229675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:58.430 passed 00:05:58.430 Test: blockdev nvme passthru rw ...passed 00:05:58.430 Test: blockdev nvme passthru vendor specific ...[2024-11-29 07:37:48.230716] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:58.430 [2024-11-29 07:37:48.230814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:58.430 passed 00:05:58.430 Test: blockdev nvme admin passthru ...passed 00:05:58.430 Test: blockdev copy ...passed 00:05:58.430 Suite: bdevio tests on: Nvme1n1 00:05:58.430 Test: blockdev write read block ...passed 00:05:58.430 Test: blockdev write zeroes read block ...passed 00:05:58.430 Test: blockdev write zeroes read no split ...passed 00:05:58.430 Test: blockdev write zeroes read split ...passed 00:05:58.430 Test: blockdev write zeroes read split partial ...passed 00:05:58.430 Test: blockdev reset ...[2024-11-29 07:37:48.286704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:05:58.430 [2024-11-29 07:37:48.289201] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. passed 00:05:58.430 Test: blockdev write read 8 blocks ...
00:05:58.430 passed 00:05:58.430 Test: blockdev write read size > 128k ...passed 00:05:58.430 Test: blockdev write read invalid size ...passed 00:05:58.430 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:58.430 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:58.430 Test: blockdev write read max offset ...passed 00:05:58.430 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:58.430 Test: blockdev writev readv 8 blocks ...passed 00:05:58.430 Test: blockdev writev readv 30 x 1block ...passed 00:05:58.430 Test: blockdev writev readv block ...passed 00:05:58.430 Test: blockdev writev readv size > 128k ...passed 00:05:58.430 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:58.430 Test: blockdev comparev and writev ...[2024-11-29 07:37:48.295862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d8e34000 len:0x1000 00:05:58.430 [2024-11-29 07:37:48.295902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:58.430 passed 00:05:58.430 Test: blockdev nvme passthru rw ...passed 00:05:58.430 Test: blockdev nvme passthru vendor specific ...passed 00:05:58.430 Test: blockdev nvme admin passthru ...[2024-11-29 07:37:48.296427] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:58.430 [2024-11-29 07:37:48.296468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:58.430 passed 00:05:58.430 Test: blockdev copy ...passed 00:05:58.430 Suite: bdevio tests on: Nvme0n1 00:05:58.430 Test: blockdev write read block ...passed 00:05:58.430 Test: blockdev write zeroes read block ...passed 00:05:58.430 Test: blockdev write zeroes read no split ...passed 00:05:58.430 Test: blockdev write zeroes read split ...passed 00:05:58.430 Test: blockdev write zeroes read split partial ...passed 00:05:58.430 Test: blockdev reset ...[2024-11-29 07:37:48.343383] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:05:58.430 [2024-11-29 07:37:48.345763] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:05:58.430 passed 00:05:58.430 Test: blockdev write read 8 blocks ...passed 00:05:58.430 Test: blockdev write read size > 128k ...passed 00:05:58.430 Test: blockdev write read invalid size ...passed 00:05:58.430 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:58.430 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:58.430 Test: blockdev write read max offset ...passed 00:05:58.430 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:58.430 Test: blockdev writev readv 8 blocks ...passed 00:05:58.430 Test: blockdev writev readv 30 x 1block ...passed 00:05:58.430 Test: blockdev writev readv block ...passed 00:05:58.430 Test: blockdev writev readv size > 128k ...passed 00:05:58.430 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:58.430 Test: blockdev comparev and writev ...passed 00:05:58.430 Test: blockdev nvme passthru rw ...[2024-11-29 07:37:48.352776] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:05:58.430 separate metadata which is not supported yet. 
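The ERROR above is expected rather than a failure: Nvme0n1 was created with 64 bytes of separate (non-interleaved) metadata, per the md_size and md_interleave fields in its earlier JSON dump, and bdevio skips comparev_and_writev on such bdevs. A quick check of the metadata layout, using the same RPCs shown above:

    ./scripts/rpc.py bdev_get_bdevs -b Nvme0n1 | jq '.[0] | {md_size, md_interleave, dif_type}'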
00:05:58.430 passed 00:05:58.430 Test: blockdev nvme passthru vendor specific ...[2024-11-29 07:37:48.353372] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:05:58.430 [2024-11-29 07:37:48.353500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0passed 00:05:58.430 Test: blockdev nvme admin passthru ... sqhd:0017 p:1 m:0 dnr:1 00:05:58.430 passed 00:05:58.430 Test: blockdev copy ...passed 00:05:58.430 00:05:58.430 Run Summary: Type Total Ran Passed Failed Inactive 00:05:58.430 suites 6 6 n/a 0 0 00:05:58.430 tests 138 138 138 0 0 00:05:58.430 asserts 893 893 893 0 n/a 00:05:58.430 00:05:58.430 Elapsed time = 0.928 seconds 00:05:58.430 0 00:05:58.688 07:37:48 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 59813 00:05:58.688 07:37:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 59813 ']' 00:05:58.688 07:37:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 59813 00:05:58.688 07:37:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:05:58.688 07:37:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:58.688 07:37:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59813 00:05:58.688 07:37:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:58.688 07:37:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:58.688 07:37:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59813' 00:05:58.688 killing process with pid 59813 00:05:58.688 07:37:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 59813 00:05:58.688 07:37:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 59813 00:05:59.255 07:37:48 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:05:59.255 ************************************ 00:05:59.255 END TEST bdev_bounds 00:05:59.255 ************************************ 00:05:59.255 00:05:59.255 real 0m1.919s 00:05:59.255 user 0m4.945s 00:05:59.255 sys 0m0.255s 00:05:59.255 07:37:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.255 07:37:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:05:59.255 07:37:48 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:05:59.255 07:37:48 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:59.255 07:37:48 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.255 07:37:48 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:59.255 ************************************ 00:05:59.255 START TEST bdev_nbd 00:05:59.255 ************************************ 00:05:59.255 07:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:05:59.255 07:37:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:05:59.255 07:37:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:05:59.255 07:37:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.255 07:37:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:59.255 07:37:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:59.255 07:37:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:05:59.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:59.255 07:37:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:05:59.255 07:37:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:05:59.255 07:37:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:05:59.255 07:37:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:05:59.255 07:37:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:05:59.255 07:37:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:59.255 07:37:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:05:59.255 07:37:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:59.255 07:37:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:05:59.255 07:37:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=59867 00:05:59.255 07:37:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:05:59.255 07:37:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 59867 /var/tmp/spdk-nbd.sock 00:05:59.255 07:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 59867 ']' 00:05:59.255 07:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:59.255 07:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.255 07:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:59.255 07:37:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:05:59.255 07:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.255 07:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:05:59.255 [2024-11-29 07:37:49.057281] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
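From here the nbd stage takes over: a bdev_svc app serves RPCs on /var/tmp/spdk-nbd.sock, each bdev is exported as a kernel /dev/nbdX node, verified with a single direct-I/O dd, and detached again. A condensed sketch of one start/verify/stop cycle, assuming a scratch output file in /tmp:

    sock=/var/tmp/spdk-nbd.sock
    ./scripts/rpc.py -s "$sock" nbd_start_disk Nvme0n1 /dev/nbd0
    # the test polls /proc/partitions until the kernel exposes the device
    grep -q -w nbd0 /proc/partitions
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    ./scripts/rpc.py -s "$sock" nbd_stop_disk /dev/nbd0
    ./scripts/rpc.py -s "$sock" nbd_get_disks   # expect [] once everything is stopped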
00:05:59.255 [2024-11-29 07:37:49.057535] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:59.512 [2024-11-29 07:37:49.214832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.512 [2024-11-29 07:37:49.291654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.082 07:37:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.082 07:37:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:00.082 07:37:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:00.082 07:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.082 07:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:00.082 07:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:00.082 07:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:00.082 07:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.082 07:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:00.082 07:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:00.082 07:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:00.082 07:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:00.082 07:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:00.082 07:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:00.082 07:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:00.342 07:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:00.342 07:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:00.342 07:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:00.342 07:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:00.342 07:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:00.342 07:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:00.342 07:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:00.342 07:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:00.342 07:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:00.342 07:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:00.342 07:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:00.342 07:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:00.342 1+0 records in 
00:06:00.342 1+0 records out 00:06:00.342 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000864958 s, 4.7 MB/s 00:06:00.342 07:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:00.342 07:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:00.342 07:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:00.342 07:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:00.342 07:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:00.342 07:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:00.342 07:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:00.342 07:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:00.602 07:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:00.602 07:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:00.602 07:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:00.602 07:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:00.602 07:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:00.602 07:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:00.602 07:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:00.602 07:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:00.602 07:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:00.602 07:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:00.602 07:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:00.602 07:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:00.602 1+0 records in 00:06:00.602 1+0 records out 00:06:00.602 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000658828 s, 6.2 MB/s 00:06:00.602 07:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:00.602 07:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:00.602 07:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:00.602 07:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:00.602 07:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:00.602 07:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:00.602 07:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:00.602 07:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:00.862 07:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:00.862 07:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:00.862 07:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:06:00.862 07:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:00.862 07:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:00.862 07:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:00.862 07:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:00.862 07:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:00.862 07:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:00.862 07:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:00.862 07:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:00.862 07:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:00.862 1+0 records in 00:06:00.862 1+0 records out 00:06:00.862 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00100242 s, 4.1 MB/s 00:06:00.862 07:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:00.862 07:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:00.862 07:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:00.862 07:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:00.862 07:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:00.862 07:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:00.862 07:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:00.862 07:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:00.862 07:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:00.862 07:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:01.123 07:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:01.123 07:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:01.123 07:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:01.123 07:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:01.123 07:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:01.123 07:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:01.123 07:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:01.123 07:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:01.123 07:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:01.123 07:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:01.123 1+0 records in 00:06:01.123 1+0 records out 00:06:01.123 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000797912 s, 5.1 MB/s 00:06:01.123 07:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:01.123 07:37:50 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:01.123 07:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:01.123 07:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:01.123 07:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:01.123 07:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:01.123 07:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:01.123 07:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:01.123 07:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:01.123 07:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:01.123 07:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:01.123 07:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:01.123 07:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:01.123 07:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:01.123 07:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:01.123 07:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:01.123 07:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:01.123 07:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:01.123 07:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:01.123 07:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:01.123 1+0 records in 00:06:01.123 1+0 records out 00:06:01.123 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00107352 s, 3.8 MB/s 00:06:01.123 07:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:01.123 07:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:01.123 07:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:01.123 07:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:01.123 07:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:01.123 07:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:01.123 07:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:01.123 07:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:01.383 07:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:01.383 07:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:01.383 07:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:01.383 07:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:01.383 07:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:01.383 07:37:51 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:01.383 07:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:01.383 07:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:01.383 07:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:01.383 07:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:01.383 07:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:01.383 07:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:01.383 1+0 records in 00:06:01.383 1+0 records out 00:06:01.383 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000995474 s, 4.1 MB/s 00:06:01.383 07:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:01.383 07:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:01.383 07:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:01.383 07:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:01.383 07:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:01.383 07:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:01.383 07:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:01.383 07:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:01.644 07:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:01.644 { 00:06:01.644 "nbd_device": "/dev/nbd0", 00:06:01.644 "bdev_name": "Nvme0n1" 00:06:01.644 }, 00:06:01.644 { 00:06:01.644 "nbd_device": "/dev/nbd1", 00:06:01.644 "bdev_name": "Nvme1n1" 00:06:01.644 }, 00:06:01.644 { 00:06:01.644 "nbd_device": "/dev/nbd2", 00:06:01.644 "bdev_name": "Nvme2n1" 00:06:01.644 }, 00:06:01.644 { 00:06:01.644 "nbd_device": "/dev/nbd3", 00:06:01.644 "bdev_name": "Nvme2n2" 00:06:01.644 }, 00:06:01.644 { 00:06:01.644 "nbd_device": "/dev/nbd4", 00:06:01.644 "bdev_name": "Nvme2n3" 00:06:01.644 }, 00:06:01.644 { 00:06:01.644 "nbd_device": "/dev/nbd5", 00:06:01.644 "bdev_name": "Nvme3n1" 00:06:01.644 } 00:06:01.644 ]' 00:06:01.644 07:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:01.644 07:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:01.644 07:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:01.644 { 00:06:01.644 "nbd_device": "/dev/nbd0", 00:06:01.644 "bdev_name": "Nvme0n1" 00:06:01.644 }, 00:06:01.644 { 00:06:01.644 "nbd_device": "/dev/nbd1", 00:06:01.644 "bdev_name": "Nvme1n1" 00:06:01.644 }, 00:06:01.644 { 00:06:01.644 "nbd_device": "/dev/nbd2", 00:06:01.644 "bdev_name": "Nvme2n1" 00:06:01.644 }, 00:06:01.644 { 00:06:01.644 "nbd_device": "/dev/nbd3", 00:06:01.644 "bdev_name": "Nvme2n2" 00:06:01.644 }, 00:06:01.644 { 00:06:01.644 "nbd_device": "/dev/nbd4", 00:06:01.644 "bdev_name": "Nvme2n3" 00:06:01.644 }, 00:06:01.644 { 00:06:01.644 "nbd_device": "/dev/nbd5", 00:06:01.644 "bdev_name": "Nvme3n1" 00:06:01.644 } 00:06:01.644 ]' 00:06:01.644 07:37:51 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:01.644 07:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.644 07:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:01.644 07:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:01.644 07:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:01.644 07:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:01.644 07:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:01.905 07:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:01.905 07:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:01.905 07:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:01.905 07:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:01.905 07:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:01.905 07:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:01.905 07:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:01.905 07:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:01.905 07:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:01.905 07:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:02.165 07:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:02.165 07:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:02.165 07:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:02.165 07:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.165 07:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:02.165 07:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:02.165 07:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:02.165 07:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:02.165 07:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:02.165 07:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:02.425 07:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:02.425 07:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:02.425 07:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:02.425 07:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.425 07:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:02.425 07:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:02.425 07:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:02.425 07:37:52 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:06:02.425 07:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:02.425 07:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:02.425 07:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:02.425 07:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:02.425 07:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:02.425 07:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.425 07:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:02.425 07:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:02.425 07:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:02.425 07:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:02.425 07:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:02.425 07:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:02.686 07:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:02.686 07:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:02.686 07:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:02.686 07:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.686 07:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:02.686 07:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:02.686 07:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:02.686 07:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:02.686 07:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:02.686 07:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:02.949 07:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:02.949 07:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:02.949 07:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:02.949 07:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.950 07:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:02.950 07:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:02.950 07:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:02.950 07:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:02.950 07:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:02.950 07:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.950 07:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:03.210 07:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:03.210 07:37:52 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:03.210 07:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:03.210 07:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:03.210 07:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:03.210 07:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:03.210 07:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:03.210 07:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:03.210 07:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:03.210 07:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:03.210 07:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:03.210 07:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:03.210 07:37:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:03.210 07:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.210 07:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:03.210 07:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:03.210 07:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:03.210 07:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:03.210 07:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:03.210 07:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.210 07:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:03.210 07:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:03.210 07:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:03.210 07:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:03.210 07:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:03.210 07:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:03.210 07:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:03.210 07:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:03.469 /dev/nbd0 00:06:03.469 07:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:03.469 07:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:03.469 07:37:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:03.469 07:37:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:03.469 07:37:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:03.469 
07:37:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:03.469 07:37:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:03.469 07:37:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:03.469 07:37:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:03.469 07:37:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:03.469 07:37:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:03.469 1+0 records in 00:06:03.469 1+0 records out 00:06:03.469 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000945351 s, 4.3 MB/s 00:06:03.469 07:37:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:03.469 07:37:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:03.469 07:37:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:03.469 07:37:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:03.469 07:37:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:03.469 07:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:03.469 07:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:03.469 07:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:03.728 /dev/nbd1 00:06:03.728 07:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:03.728 07:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:03.728 07:37:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:03.728 07:37:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:03.728 07:37:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:03.728 07:37:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:03.728 07:37:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:03.728 07:37:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:03.728 07:37:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:03.728 07:37:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:03.728 07:37:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:03.728 1+0 records in 00:06:03.728 1+0 records out 00:06:03.728 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00139255 s, 2.9 MB/s 00:06:03.728 07:37:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:03.728 07:37:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:03.728 07:37:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:03.728 07:37:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:03.728 07:37:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 
-- # return 0 00:06:03.728 07:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:03.728 07:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:03.728 07:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:03.986 /dev/nbd10 00:06:03.986 07:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:03.986 07:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:03.986 07:37:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:03.986 07:37:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:03.986 07:37:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:03.986 07:37:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:03.986 07:37:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:03.986 07:37:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:03.986 07:37:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:03.986 07:37:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:03.986 07:37:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:03.986 1+0 records in 00:06:03.986 1+0 records out 00:06:03.986 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000596599 s, 6.9 MB/s 00:06:03.986 07:37:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:03.986 07:37:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:03.986 07:37:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:03.986 07:37:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:03.986 07:37:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:03.986 07:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:03.986 07:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:03.986 07:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:03.986 /dev/nbd11 00:06:04.244 07:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:04.244 07:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:04.244 07:37:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:04.244 07:37:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:04.244 07:37:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:04.244 07:37:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:04.244 07:37:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:04.244 07:37:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:04.244 07:37:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:04.244 07:37:53 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:04.244 07:37:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:04.244 1+0 records in 00:06:04.244 1+0 records out 00:06:04.244 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257556 s, 15.9 MB/s 00:06:04.244 07:37:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:04.244 07:37:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:04.244 07:37:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:04.244 07:37:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:04.244 07:37:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:04.244 07:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:04.244 07:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:04.244 07:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:04.244 /dev/nbd12 00:06:04.244 07:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:04.244 07:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:04.244 07:37:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:04.244 07:37:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:04.244 07:37:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:04.244 07:37:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:04.244 07:37:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:06:04.244 07:37:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:04.244 07:37:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:04.244 07:37:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:04.244 07:37:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:04.244 1+0 records in 00:06:04.244 1+0 records out 00:06:04.244 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000503498 s, 8.1 MB/s 00:06:04.502 07:37:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:04.502 07:37:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:04.502 07:37:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:04.502 07:37:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:04.502 07:37:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:04.502 07:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:04.502 07:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:04.502 07:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:04.502 /dev/nbd13 00:06:04.502 07:37:54 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:04.502 07:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:04.502 07:37:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:04.502 07:37:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:04.502 07:37:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:04.502 07:37:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:04.502 07:37:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:04.502 07:37:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:04.502 07:37:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:04.502 07:37:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:04.502 07:37:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:04.502 1+0 records in 00:06:04.502 1+0 records out 00:06:04.502 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000572368 s, 7.2 MB/s 00:06:04.502 07:37:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:04.502 07:37:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:04.502 07:37:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:04.503 07:37:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:04.503 07:37:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:04.503 07:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:04.503 07:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:04.503 07:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:04.503 07:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.503 07:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:04.761 07:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:04.761 { 00:06:04.761 "nbd_device": "/dev/nbd0", 00:06:04.761 "bdev_name": "Nvme0n1" 00:06:04.761 }, 00:06:04.761 { 00:06:04.761 "nbd_device": "/dev/nbd1", 00:06:04.761 "bdev_name": "Nvme1n1" 00:06:04.761 }, 00:06:04.761 { 00:06:04.761 "nbd_device": "/dev/nbd10", 00:06:04.761 "bdev_name": "Nvme2n1" 00:06:04.761 }, 00:06:04.761 { 00:06:04.761 "nbd_device": "/dev/nbd11", 00:06:04.761 "bdev_name": "Nvme2n2" 00:06:04.761 }, 00:06:04.761 { 00:06:04.761 "nbd_device": "/dev/nbd12", 00:06:04.761 "bdev_name": "Nvme2n3" 00:06:04.761 }, 00:06:04.761 { 00:06:04.761 "nbd_device": "/dev/nbd13", 00:06:04.761 "bdev_name": "Nvme3n1" 00:06:04.761 } 00:06:04.761 ]' 00:06:04.761 07:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:04.761 { 00:06:04.761 "nbd_device": "/dev/nbd0", 00:06:04.761 "bdev_name": "Nvme0n1" 00:06:04.761 }, 00:06:04.761 { 00:06:04.761 "nbd_device": "/dev/nbd1", 00:06:04.761 "bdev_name": "Nvme1n1" 00:06:04.761 }, 00:06:04.761 { 00:06:04.761 "nbd_device": "/dev/nbd10", 00:06:04.761 "bdev_name": "Nvme2n1" 00:06:04.761 }, 00:06:04.761 
{ 00:06:04.761 "nbd_device": "/dev/nbd11", 00:06:04.761 "bdev_name": "Nvme2n2" 00:06:04.761 }, 00:06:04.761 { 00:06:04.761 "nbd_device": "/dev/nbd12", 00:06:04.761 "bdev_name": "Nvme2n3" 00:06:04.761 }, 00:06:04.761 { 00:06:04.761 "nbd_device": "/dev/nbd13", 00:06:04.761 "bdev_name": "Nvme3n1" 00:06:04.761 } 00:06:04.761 ]' 00:06:04.761 07:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:04.761 07:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:04.761 /dev/nbd1 00:06:04.761 /dev/nbd10 00:06:04.761 /dev/nbd11 00:06:04.761 /dev/nbd12 00:06:04.761 /dev/nbd13' 00:06:04.761 07:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:04.761 07:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:04.761 /dev/nbd1 00:06:04.761 /dev/nbd10 00:06:04.761 /dev/nbd11 00:06:04.761 /dev/nbd12 00:06:04.761 /dev/nbd13' 00:06:04.761 07:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:04.761 07:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:04.761 07:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:04.761 07:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:04.761 07:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:04.761 07:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:04.761 07:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:04.761 07:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:04.761 07:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:04.761 07:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:04.761 07:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:04.761 256+0 records in 00:06:04.761 256+0 records out 00:06:04.761 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0086928 s, 121 MB/s 00:06:04.761 07:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:04.761 07:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:05.019 256+0 records in 00:06:05.019 256+0 records out 00:06:05.019 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0524559 s, 20.0 MB/s 00:06:05.019 07:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:05.019 07:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:05.019 256+0 records in 00:06:05.019 256+0 records out 00:06:05.019 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0514199 s, 20.4 MB/s 00:06:05.019 07:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:05.019 07:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:05.019 256+0 records in 00:06:05.019 256+0 records out 00:06:05.019 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.052027 s, 20.2 MB/s 00:06:05.019 07:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:05.019 07:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:05.019 256+0 records in 00:06:05.019 256+0 records out 00:06:05.019 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0546758 s, 19.2 MB/s 00:06:05.019 07:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:05.019 07:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:05.019 256+0 records in 00:06:05.019 256+0 records out 00:06:05.019 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0509541 s, 20.6 MB/s 00:06:05.019 07:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:05.019 07:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:05.278 256+0 records in 00:06:05.278 256+0 records out 00:06:05.278 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.051491 s, 20.4 MB/s 00:06:05.278 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:05.278 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:05.278 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:05.278 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:05.278 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:05.278 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:05.278 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:05.278 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:05.278 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:05.278 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:05.278 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:05.278 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:05.278 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:05.278 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:05.278 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:05.278 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:05.278 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:05.278 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:05.278 07:37:55 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:05.278 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:05.278 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:05.278 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.278 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:05.278 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:05.278 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:05.278 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:05.278 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:05.536 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:05.536 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:05.536 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:05.536 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:05.536 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:05.536 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:05.536 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:05.536 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:05.536 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:05.536 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:05.536 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:05.536 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:05.536 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:05.536 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:05.536 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:05.536 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:05.536 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:05.536 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:05.536 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:05.536 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:05.794 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:05.794 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:05.794 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:05.794 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:05.794 07:37:55 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:05.794 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:05.794 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:05.794 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:05.794 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:05.794 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:06.052 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:06.052 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:06.052 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:06.052 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.052 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.052 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:06.052 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:06.052 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.052 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.052 07:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:06.310 07:37:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:06.310 07:37:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:06.310 07:37:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:06.310 07:37:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.310 07:37:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.310 07:37:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:06.310 07:37:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:06.310 07:37:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.310 07:37:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.310 07:37:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:06.568 07:37:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:06.568 07:37:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:06.568 07:37:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:06.568 07:37:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.568 07:37:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.568 07:37:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:06.568 07:37:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:06.568 07:37:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.568 07:37:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:06.568 07:37:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- 
# local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.568 07:37:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:06.568 07:37:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:06.568 07:37:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:06.568 07:37:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:06.568 07:37:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:06.568 07:37:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.568 07:37:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:06.827 07:37:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:06.827 07:37:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:06.827 07:37:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:06.827 07:37:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:06.827 07:37:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:06.827 07:37:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:06.827 07:37:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:06.827 07:37:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.827 07:37:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:06.827 07:37:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:06.827 malloc_lvol_verify 00:06:06.827 07:37:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:07.085 1a273471-004a-4235-9404-edb2f447d5aa 00:06:07.085 07:37:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:07.343 f9f97686-5ba2-4186-bfa4-3fcb2210aa82 00:06:07.343 07:37:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:07.601 /dev/nbd0 00:06:07.601 07:37:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:07.601 07:37:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:07.601 07:37:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:07.601 07:37:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:07.601 07:37:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:07.601 mke2fs 1.47.0 (5-Feb-2023) 00:06:07.601 Discarding device blocks: 0/4096 done 00:06:07.601 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:07.601 00:06:07.601 Allocating group tables: 0/1 done 00:06:07.601 Writing inode tables: 0/1 done 00:06:07.601 Creating journal (1024 blocks): done 00:06:07.601 Writing superblocks and filesystem accounting information: 0/1 done 00:06:07.601 00:06:07.601 07:37:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:07.601 07:37:57 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.601 07:37:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:07.601 07:37:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:07.601 07:37:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:07.601 07:37:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:07.601 07:37:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:07.859 07:37:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:07.859 07:37:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:07.859 07:37:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:07.859 07:37:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:07.859 07:37:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:07.859 07:37:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:07.859 07:37:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:07.859 07:37:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:07.859 07:37:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 59867 00:06:07.859 07:37:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 59867 ']' 00:06:07.859 07:37:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 59867 00:06:07.859 07:37:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:06:07.859 07:37:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:07.859 07:37:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59867 00:06:07.859 killing process with pid 59867 00:06:07.859 07:37:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:07.859 07:37:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:07.859 07:37:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59867' 00:06:07.859 07:37:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 59867 00:06:07.859 07:37:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 59867 00:06:08.427 07:37:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:08.427 00:06:08.427 real 0m9.198s 00:06:08.427 user 0m13.444s 00:06:08.427 sys 0m2.909s 00:06:08.427 07:37:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.427 07:37:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:08.427 ************************************ 00:06:08.427 END TEST bdev_nbd 00:06:08.427 ************************************ 00:06:08.427 07:37:58 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:06:08.427 07:37:58 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:06:08.427 skipping fio tests on NVMe due to multi-ns failures. 00:06:08.427 07:37:58 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
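The grep -q -w nbdN /proc/partitions loops traced throughout the nbd phase above come from the waitfornbd and waitfornbd_exit helpers: after each nbd_start_disk or nbd_stop_disk RPC, the helper polls /proc/partitions for up to 20 iterations (the (( i <= 20 )) guards in the trace) until the kernel has registered or removed the device, and waitfornbd then proves the device serves I/O by dd-reading one 4 KiB block with iflag=direct and checking via stat that a non-empty copy landed. A minimal sketch of that pattern, reconstructed from the xtrace lines; the sleep interval and the failure path are assumptions, since neither is visible in the log:

  waitfornbd() {
      local nbd_name=$1 i size
      # Poll until the kernel lists the device (20-try bound, as in the trace).
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1   # assumed interval; the xtrace output does not show it
      done
      # Read back one 4 KiB block to confirm the device actually answers I/O.
      dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
      size=$(stat -c %s /tmp/nbdtest)
      rm -f /tmp/nbdtest
      [ "$size" != 0 ]   # the trace's '[' 4096 '!=' 0 ']' check
  }

waitfornbd_exit is the mirror image: the same bounded loop, breaking once grep no longer finds the device in /proc/partitions.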
00:06:08.427 07:37:58 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:08.427 07:37:58 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:08.427 07:37:58 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:08.427 07:37:58 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.427 07:37:58 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:08.427 ************************************ 00:06:08.427 START TEST bdev_verify 00:06:08.427 ************************************ 00:06:08.427 07:37:58 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:08.427 [2024-11-29 07:37:58.315961] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:06:08.427 [2024-11-29 07:37:58.316070] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60234 ] 00:06:08.685 [2024-11-29 07:37:58.471132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:08.685 [2024-11-29 07:37:58.548538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.685 [2024-11-29 07:37:58.548544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.251 Running I/O for 5 seconds... 00:06:11.574 20032.00 IOPS, 78.25 MiB/s [2024-11-29T07:38:02.458Z] 20320.00 IOPS, 79.38 MiB/s [2024-11-29T07:38:03.456Z] 20160.00 IOPS, 78.75 MiB/s [2024-11-29T07:38:04.397Z] 20624.00 IOPS, 80.56 MiB/s [2024-11-29T07:38:04.397Z] 20544.00 IOPS, 80.25 MiB/s 00:06:14.453 Latency(us) 00:06:14.453 [2024-11-29T07:38:04.398Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:14.454 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:14.454 Verification LBA range: start 0x0 length 0xbd0bd 00:06:14.454 Nvme0n1 : 5.08 1714.33 6.70 0.00 0.00 74525.56 10889.06 64124.46 00:06:14.454 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:14.454 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:14.454 Nvme0n1 : 5.07 1667.89 6.52 0.00 0.00 76549.07 14014.62 66947.54 00:06:14.454 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:14.454 Verification LBA range: start 0x0 length 0xa0000 00:06:14.454 Nvme1n1 : 5.08 1713.30 6.69 0.00 0.00 74467.30 12703.90 60898.07 00:06:14.454 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:14.454 Verification LBA range: start 0xa0000 length 0xa0000 00:06:14.454 Nvme1n1 : 5.07 1667.40 6.51 0.00 0.00 76398.46 17140.18 65334.35 00:06:14.454 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:14.454 Verification LBA range: start 0x0 length 0x80000 00:06:14.454 Nvme2n1 : 5.08 1712.85 6.69 0.00 0.00 74384.17 11241.94 60091.47 00:06:14.454 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:14.454 Verification LBA range: start 0x80000 length 0x80000 00:06:14.454 Nvme2n1 : 5.07 1666.87 6.51 0.00 0.00 76261.61 17845.96 63721.16 00:06:14.454 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:14.454 Verification LBA range: start 0x0 length 0x80000 00:06:14.454 Nvme2n2 : 5.08 1712.40 6.69 0.00 0.00 74271.25 11443.59 60898.07 00:06:14.454 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:14.454 Verification LBA range: start 0x80000 length 0x80000 00:06:14.454 Nvme2n2 : 5.07 1666.29 6.51 0.00 0.00 76127.62 14821.22 64931.05 00:06:14.454 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:14.454 Verification LBA range: start 0x0 length 0x80000 00:06:14.454 Nvme2n3 : 5.08 1711.96 6.69 0.00 0.00 74154.32 11796.48 62107.96 00:06:14.454 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:14.454 Verification LBA range: start 0x80000 length 0x80000 00:06:14.454 Nvme2n3 : 5.07 1665.70 6.51 0.00 0.00 76003.93 10032.05 66140.95 00:06:14.454 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:14.454 Verification LBA range: start 0x0 length 0x20000 00:06:14.454 Nvme3n1 : 5.09 1711.51 6.69 0.00 0.00 74035.37 7461.02 64124.46 00:06:14.454 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:14.454 Verification LBA range: start 0x20000 length 0x20000 00:06:14.454 Nvme3n1 : 5.07 1665.13 6.50 0.00 0.00 75957.69 8116.38 68560.74 00:06:14.454 [2024-11-29T07:38:04.398Z] =================================================================================================================== 00:06:14.454 [2024-11-29T07:38:04.398Z] Total : 20275.65 79.20 0.00 0.00 75247.11 7461.02 68560.74 00:06:15.837 00:06:15.837 real 0m7.352s 00:06:15.837 user 0m13.834s 00:06:15.837 sys 0m0.195s 00:06:15.837 07:38:05 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.837 ************************************ 00:06:15.837 END TEST bdev_verify 00:06:15.837 ************************************ 00:06:15.837 07:38:05 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:15.837 07:38:05 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:15.837 07:38:05 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:15.837 07:38:05 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.837 07:38:05 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:15.837 ************************************ 00:06:15.837 START TEST bdev_verify_big_io 00:06:15.837 ************************************ 00:06:15.837 07:38:05 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:15.837 [2024-11-29 07:38:05.728831] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
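Both verify phases drive the same bdevperf binary; the per-core Job rows in the tables are its per-reactor accounting. The invocation recorded in the trace, spread across lines with the flag meanings annotated (meanings paraphrased from bdevperf's usage text; confirm against build/examples/bdevperf --help):

  build/examples/bdevperf \
      --json test/bdev/bdev.json \  # bdev config that attaches the six NVMe namespaces
      -q 128 \                      # queue depth per job
      -o 4096 \                     # I/O size in bytes (the big_io pass uses -o 65536)
      -w verify \                   # write a pattern, read it back, compare
      -t 5 \                        # run time in seconds
      -C \                          # let every core in the mask submit I/O to each bdev
      -m 0x3                        # core mask: two reactors, hence the 0x1/0x2 job pairs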
00:06:15.837 [2024-11-29 07:38:05.728947] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60332 ] 00:06:16.099 [2024-11-29 07:38:05.891084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:16.099 [2024-11-29 07:38:05.989183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.099 [2024-11-29 07:38:05.989269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.042 Running I/O for 5 seconds... 00:06:21.522 728.00 IOPS, 45.50 MiB/s [2024-11-29T07:38:12.842Z] 1901.00 IOPS, 118.81 MiB/s [2024-11-29T07:38:12.842Z] 2795.00 IOPS, 174.69 MiB/s 00:06:22.898 Latency(us) 00:06:22.898 [2024-11-29T07:38:12.842Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:22.898 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:22.898 Verification LBA range: start 0x0 length 0xbd0b 00:06:22.898 Nvme0n1 : 5.78 105.62 6.60 0.00 0.00 1137153.23 16131.94 1226027.32 00:06:22.898 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:22.898 Verification LBA range: start 0xbd0b length 0xbd0b 00:06:22.898 Nvme0n1 : 5.48 116.70 7.29 0.00 0.00 1054403.90 24399.56 1226027.32 00:06:22.898 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:22.898 Verification LBA range: start 0x0 length 0xa000 00:06:22.898 Nvme1n1 : 5.79 110.56 6.91 0.00 0.00 1071396.39 100824.62 1006632.96 00:06:22.898 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:22.898 Verification LBA range: start 0xa000 length 0xa000 00:06:22.898 Nvme1n1 : 5.73 122.87 7.68 0.00 0.00 972629.29 80256.39 1019538.51 00:06:22.898 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:22.898 Verification LBA range: start 0x0 length 0x8000 00:06:22.898 Nvme2n1 : 5.87 113.21 7.08 0.00 0.00 1010379.87 73803.62 942105.21 00:06:22.898 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:22.898 Verification LBA range: start 0x8000 length 0x8000 00:06:22.898 Nvme2n1 : 5.86 127.14 7.95 0.00 0.00 905395.03 53638.70 1013085.74 00:06:22.898 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:22.898 Verification LBA range: start 0x0 length 0x8000 00:06:22.898 Nvme2n2 : 5.94 118.55 7.41 0.00 0.00 934907.70 70577.23 967916.31 00:06:22.898 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:22.898 Verification LBA range: start 0x8000 length 0x8000 00:06:22.898 Nvme2n2 : 5.86 131.03 8.19 0.00 0.00 852524.11 73803.62 1038896.84 00:06:22.898 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:22.898 Verification LBA range: start 0x0 length 0x8000 00:06:22.898 Nvme2n3 : 6.02 127.52 7.97 0.00 0.00 841917.97 33877.07 993727.41 00:06:22.898 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:22.898 Verification LBA range: start 0x8000 length 0x8000 00:06:22.898 Nvme2n3 : 6.03 144.53 9.03 0.00 0.00 746085.27 14317.10 1071160.71 00:06:22.898 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:22.898 Verification LBA range: start 0x0 length 0x2000 00:06:22.898 Nvme3n1 : 6.08 137.99 8.62 0.00 0.00 752200.60 825.50 2155226.98 00:06:22.898 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 
128, IO size: 65536) 00:06:22.898 Verification LBA range: start 0x2000 length 0x2000 00:06:22.898 Nvme3n1 : 6.08 164.88 10.30 0.00 0.00 633918.57 447.41 1096971.82 00:06:22.898 [2024-11-29T07:38:12.842Z] =================================================================================================================== 00:06:22.898 [2024-11-29T07:38:12.842Z] Total : 1520.61 95.04 0.00 0.00 888894.83 447.41 2155226.98 00:06:24.821 00:06:24.821 real 0m8.640s 00:06:24.821 user 0m16.350s 00:06:24.821 sys 0m0.221s 00:06:24.821 ************************************ 00:06:24.821 07:38:14 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.821 07:38:14 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:06:24.821 END TEST bdev_verify_big_io 00:06:24.821 ************************************ 00:06:24.821 07:38:14 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:24.821 07:38:14 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:24.821 07:38:14 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.821 07:38:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:24.821 ************************************ 00:06:24.821 START TEST bdev_write_zeroes 00:06:24.821 ************************************ 00:06:24.821 07:38:14 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:24.821 [2024-11-29 07:38:14.432475] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:06:24.821 [2024-11-29 07:38:14.432590] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60441 ] 00:06:24.821 [2024-11-29 07:38:14.592155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.821 [2024-11-29 07:38:14.689847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.392 Running I/O for 1 seconds... 
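Every phase in this log is wrapped by run_test from common/autotest_common.sh, which produces the starred START TEST / END TEST banners and the real/user/sys timing triplets that bracket each phase. A simplified model of the wrapper, inferred from those banners alone (the real helper also manages timing records and the xtrace_disable toggling visible in the trace):

  run_test() {
      local name=$1 rc
      shift
      echo '************************************'
      echo "START TEST $name"
      echo '************************************'
      time "$@"   # source of the real/user/sys lines after each phase
      rc=$?
      echo '************************************'
      echo "END TEST $name"
      echo '************************************'
      return $rc
  }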
00:06:26.768 66048.00 IOPS, 258.00 MiB/s 00:06:26.768 Latency(us) 00:06:26.768 [2024-11-29T07:38:16.712Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:26.768 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:26.768 Nvme0n1 : 1.02 10990.95 42.93 0.00 0.00 11622.74 5520.15 23189.66 00:06:26.768 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:26.768 Nvme1n1 : 1.02 10978.53 42.88 0.00 0.00 11621.78 7511.43 21979.77 00:06:26.768 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:26.768 Nvme2n1 : 1.02 10966.16 42.84 0.00 0.00 11575.50 6654.42 19761.62 00:06:26.768 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:26.768 Nvme2n2 : 1.02 10953.73 42.79 0.00 0.00 11571.71 7612.26 19156.68 00:06:26.768 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:26.768 Nvme2n3 : 1.02 10941.46 42.74 0.00 0.00 11565.54 7612.26 19761.62 00:06:26.768 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:26.768 Nvme3n1 : 1.02 10929.22 42.69 0.00 0.00 11560.95 7511.43 21475.64 00:06:26.768 [2024-11-29T07:38:16.712Z] =================================================================================================================== 00:06:26.768 [2024-11-29T07:38:16.712Z] Total : 65760.05 256.88 0.00 0.00 11586.37 5520.15 23189.66 00:06:27.340 00:06:27.340 real 0m2.668s 00:06:27.340 user 0m2.381s 00:06:27.340 sys 0m0.172s 00:06:27.340 ************************************ 00:06:27.340 END TEST bdev_write_zeroes 00:06:27.340 ************************************ 00:06:27.340 07:38:17 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.340 07:38:17 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:06:27.340 07:38:17 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:27.340 07:38:17 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:27.340 07:38:17 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.340 07:38:17 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:27.340 ************************************ 00:06:27.340 START TEST bdev_json_nonenclosed 00:06:27.340 ************************************ 00:06:27.340 07:38:17 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:27.340 [2024-11-29 07:38:17.164902] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
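The write_zeroes summary above is internally consistent: bandwidth is IOPS times the 4 KiB I/O size. A quick check of the headline row (66048.00 IOPS, 258.00 MiB/s):

  echo $(( 66048 * 4096 / 1024 / 1024 ))   # prints 258, the MiB/s figure in the table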
00:06:27.340 [2024-11-29 07:38:17.165165] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60494 ] 00:06:27.602 [2024-11-29 07:38:17.325333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.602 [2024-11-29 07:38:17.424797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.602 [2024-11-29 07:38:17.424878] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:27.602 [2024-11-29 07:38:17.424894] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:27.602 [2024-11-29 07:38:17.424903] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:27.863 00:06:27.863 real 0m0.498s 00:06:27.863 user 0m0.305s 00:06:27.863 sys 0m0.090s 00:06:27.863 07:38:17 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.863 ************************************ 00:06:27.863 END TEST bdev_json_nonenclosed 00:06:27.863 ************************************ 00:06:27.863 07:38:17 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:27.863 07:38:17 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:27.863 07:38:17 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:27.863 07:38:17 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.863 07:38:17 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:27.863 ************************************ 00:06:27.863 START TEST bdev_json_nonarray 00:06:27.863 ************************************ 00:06:27.863 07:38:17 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:27.863 [2024-11-29 07:38:17.725073] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:06:27.863 [2024-11-29 07:38:17.725331] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60520 ] 00:06:28.124 [2024-11-29 07:38:17.887040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.124 [2024-11-29 07:38:17.986423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.124 [2024-11-29 07:38:17.986528] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:06:28.124 [2024-11-29 07:38:17.986546] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:28.124 [2024-11-29 07:38:17.986555] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:28.387 00:06:28.387 real 0m0.501s 00:06:28.387 user 0m0.303s 00:06:28.387 sys 0m0.093s 00:06:28.387 ************************************ 00:06:28.387 END TEST bdev_json_nonarray 00:06:28.387 ************************************ 00:06:28.387 07:38:18 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.387 07:38:18 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:28.387 07:38:18 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:06:28.387 07:38:18 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:06:28.387 07:38:18 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:06:28.387 07:38:18 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:06:28.387 07:38:18 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:06:28.387 07:38:18 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:28.387 07:38:18 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:28.387 07:38:18 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:06:28.387 07:38:18 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:06:28.387 07:38:18 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:06:28.387 07:38:18 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:06:28.387 00:06:28.387 real 0m35.860s 00:06:28.387 user 0m55.981s 00:06:28.387 sys 0m4.844s 00:06:28.387 07:38:18 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.387 07:38:18 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:28.387 ************************************ 00:06:28.387 END TEST blockdev_nvme 00:06:28.387 ************************************ 00:06:28.387 07:38:18 -- spdk/autotest.sh@209 -- # uname -s 00:06:28.387 07:38:18 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:06:28.387 07:38:18 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:28.387 07:38:18 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:28.387 07:38:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.387 07:38:18 -- common/autotest_common.sh@10 -- # set +x 00:06:28.387 ************************************ 00:06:28.387 START TEST blockdev_nvme_gpt 00:06:28.387 ************************************ 00:06:28.387 07:38:18 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:28.647 * Looking for test storage... 
00:06:28.647 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:28.647 07:38:18 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:28.647 07:38:18 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:06:28.647 07:38:18 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:28.647 07:38:18 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:28.647 07:38:18 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:28.647 07:38:18 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:28.647 07:38:18 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:28.647 07:38:18 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:06:28.647 07:38:18 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:06:28.647 07:38:18 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:06:28.647 07:38:18 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:06:28.647 07:38:18 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:06:28.647 07:38:18 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:06:28.647 07:38:18 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:06:28.647 07:38:18 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:28.647 07:38:18 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:06:28.647 07:38:18 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:06:28.647 07:38:18 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:28.647 07:38:18 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:28.647 07:38:18 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:06:28.647 07:38:18 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:06:28.647 07:38:18 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:28.647 07:38:18 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:06:28.647 07:38:18 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:06:28.647 07:38:18 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:06:28.647 07:38:18 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:06:28.647 07:38:18 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:28.647 07:38:18 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:06:28.647 07:38:18 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:06:28.647 07:38:18 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:28.647 07:38:18 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:28.647 07:38:18 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:06:28.647 07:38:18 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:28.647 07:38:18 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:28.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.647 --rc genhtml_branch_coverage=1 00:06:28.647 --rc genhtml_function_coverage=1 00:06:28.647 --rc genhtml_legend=1 00:06:28.647 --rc geninfo_all_blocks=1 00:06:28.647 --rc geninfo_unexecuted_blocks=1 00:06:28.647 00:06:28.647 ' 00:06:28.647 07:38:18 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:28.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.647 --rc 
genhtml_branch_coverage=1 00:06:28.647 --rc genhtml_function_coverage=1 00:06:28.647 --rc genhtml_legend=1 00:06:28.647 --rc geninfo_all_blocks=1 00:06:28.647 --rc geninfo_unexecuted_blocks=1 00:06:28.647 00:06:28.647 ' 00:06:28.647 07:38:18 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:28.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.648 --rc genhtml_branch_coverage=1 00:06:28.648 --rc genhtml_function_coverage=1 00:06:28.648 --rc genhtml_legend=1 00:06:28.648 --rc geninfo_all_blocks=1 00:06:28.648 --rc geninfo_unexecuted_blocks=1 00:06:28.648 00:06:28.648 ' 00:06:28.648 07:38:18 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:28.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.648 --rc genhtml_branch_coverage=1 00:06:28.648 --rc genhtml_function_coverage=1 00:06:28.648 --rc genhtml_legend=1 00:06:28.648 --rc geninfo_all_blocks=1 00:06:28.648 --rc geninfo_unexecuted_blocks=1 00:06:28.648 00:06:28.648 ' 00:06:28.648 07:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:28.648 07:38:18 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:06:28.648 07:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:28.648 07:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:28.648 07:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:28.648 07:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:28.648 07:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:28.648 07:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:28.648 07:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:06:28.648 07:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:06:28.648 07:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:06:28.648 07:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:06:28.648 07:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:06:28.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
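Note: the long cmp_versions trace above is scripts/common.sh picking which lcov flag spelling to use. Both version strings are split on '.' and '-' and compared component by component; because the installed lcov (1.15) sorts below 2, the 1.x-style "--rc lcov_branch_coverage=1" options are exported. A condensed, illustrative re-implementation of that comparison (not the exact script):

  version_lt() {                        # returns 0 (true) when $1 < $2
      local -a ver1 ver2
      local v len
      IFS='.-' read -ra ver1 <<< "$1"
      IFS='.-' read -ra ver2 <<< "$2"
      len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < len; v++ )); do
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      done
      return 1                          # equal is not less-than
  }
  version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "using lcov 1.x flags"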
00:06:28.648 07:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:06:28.648 07:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:06:28.648 07:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:06:28.648 07:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:06:28.648 07:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:06:28.648 07:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:06:28.648 07:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:06:28.648 07:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:06:28.648 07:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:06:28.648 07:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:06:28.648 07:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:06:28.648 07:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60598 00:06:28.648 07:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:28.648 07:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 60598 00:06:28.648 07:38:18 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 60598 ']' 00:06:28.648 07:38:18 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.648 07:38:18 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:28.648 07:38:18 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.648 07:38:18 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:28.648 07:38:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:28.648 07:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:28.648 [2024-11-29 07:38:18.509526] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:06:28.648 [2024-11-29 07:38:18.509789] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60598 ] 00:06:28.909 [2024-11-29 07:38:18.665171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.909 [2024-11-29 07:38:18.762021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.481 07:38:19 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.481 07:38:19 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:06:29.481 07:38:19 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:06:29.481 07:38:19 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:06:29.481 07:38:19 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:29.741 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:30.003 Waiting for block devices as requested 00:06:30.003 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:30.003 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:30.003 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:30.263 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:35.543 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:35.543 07:38:25 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:06:35.543 07:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:35.543 07:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:35.543 07:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:06:35.543 07:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:35.543 07:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:06:35.543 07:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:35.543 07:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:35.543 07:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:35.543 07:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:35.543 07:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:06:35.543 07:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:35.543 07:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:35.543 07:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:35.543 07:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:35.543 07:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:06:35.543 07:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:06:35.543 07:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:35.543 07:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:35.543 07:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- 
# for nvme in /sys/block/nvme* 00:06:35.543 07:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:06:35.543 07:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:06:35.543 07:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:35.543 07:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:35.543 07:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:35.543 07:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:06:35.543 07:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:06:35.543 07:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:35.543 07:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:35.543 07:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:35.543 07:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:06:35.543 07:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:06:35.543 07:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:35.543 07:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:35.543 07:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:35.543 07:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:06:35.543 07:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:06:35.543 07:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:06:35.543 07:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:35.543 07:38:25 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:06:35.543 07:38:25 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:06:35.543 07:38:25 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:06:35.543 07:38:25 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:06:35.543 07:38:25 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:06:35.543 07:38:25 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:06:35.543 07:38:25 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:06:35.543 07:38:25 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:06:35.543 BYT; 00:06:35.543 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:06:35.543 07:38:25 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:06:35.543 BYT; 00:06:35.543 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:06:35.543 07:38:25 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:06:35.543 07:38:25 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:06:35.543 07:38:25 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 
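Note: the repetitive xtrace loop above is get_zoned_devs(). Each /sys/block/nvme* namespace's queue/zoned attribute is checked so zoned namespaces can be excluded before a GPT label is written; here every device reports "none", and /dev/nvme0n1, the first device with an unrecognised (blank) label, is selected for partitioning. Condensed sketch of the traced scan (not the exact helper):

  declare -A zoned_devs=()
  for nvme in /sys/block/nvme*; do
      [[ -e $nvme/queue/zoned ]] || continue            # attribute absent: treat as regular
      if [[ $(<"$nvme/queue/zoned") != none ]]; then    # "none" means a conventional device
          zoned_devs[${nvme##*/}]=1                     # keyed by device name here for brevity
      fi
  done
  echo "found ${#zoned_devs[@]} zoned device(s): ${!zoned_devs[*]}"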
00:06:35.543 07:38:25 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:06:35.543 07:38:25 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:06:35.543 07:38:25 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:06:35.543 07:38:25 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:06:35.543 07:38:25 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:06:35.543 07:38:25 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:35.543 07:38:25 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:35.543 07:38:25 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:06:35.543 07:38:25 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:06:35.543 07:38:25 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:35.543 07:38:25 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:06:35.543 07:38:25 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:35.543 07:38:25 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:35.543 07:38:25 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:35.543 07:38:25 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:06:35.543 07:38:25 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:06:35.543 07:38:25 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:35.543 07:38:25 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:35.543 07:38:25 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:06:35.543 07:38:25 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:06:35.543 07:38:25 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:35.543 07:38:25 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:06:35.543 07:38:25 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:35.543 07:38:25 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:35.543 07:38:25 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:35.544 07:38:25 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:06:36.478 The operation has completed successfully. 00:06:36.478 07:38:26 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:06:37.412 The operation has completed successfully. 
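Note: everything from parted onward boils down to four commands: write a fresh GPT label with two half-disk partitions, then retag them with SPDK's partition type GUIDs (extracted from module/bdev/gpt/gpt.h in the trace) so the gpt vbdev module later exposes them as Nvme1n1p1/Nvme1n1p2 in the bdev list below. Collapsed for readability, with the same device and GUIDs as traced:

  dev=/dev/nvme0n1
  parted -s "$dev" mklabel gpt \
      mkpart SPDK_TEST_first 0% 50% \
      mkpart SPDK_TEST_second 50% 100%
  # partition 1 gets the current SPDK GPT type GUID, partition 2 the legacy ("old") one
  sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 "$dev"
  sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df "$dev"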
00:06:37.412 07:38:27 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:37.978 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:38.236 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:38.237 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:38.237 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:38.495 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:38.495 07:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:06:38.495 07:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.495 07:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:38.495 [] 00:06:38.495 07:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.495 07:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:06:38.495 07:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:06:38.495 07:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:38.495 07:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:38.495 07:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:38.495 07:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.495 07:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:38.754 07:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.754 07:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:06:38.754 07:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.754 07:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:38.754 07:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.754 07:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:06:38.754 07:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:06:38.754 07:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.754 07:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:38.754 07:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.754 07:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:06:38.754 07:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.754 07:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:38.754 07:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.754 07:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:38.754 07:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.754 07:38:28 blockdev_nvme_gpt -- 
common/autotest_common.sh@10 -- # set +x 00:06:38.754 07:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.754 07:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:06:38.754 07:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:06:38.754 07:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:06:38.754 07:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.754 07:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:38.754 07:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.755 07:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:06:38.755 07:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:06:38.755 07:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "90abe877-f29e-4605-be54-ece720951017"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "90abe877-f29e-4605-be54-ece720951017",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": 
"6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "69f9e81a-b6b0-4f99-8d84-4cc25a42c658"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "69f9e81a-b6b0-4f99-8d84-4cc25a42c658",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "b636d2a1-9a7e-4992-aa66-0035f659e576"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b636d2a1-9a7e-4992-aa66-0035f659e576",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' 
"zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "164cab87-7a06-44a9-a9f8-2af24a5aca7d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "164cab87-7a06-44a9-a9f8-2af24a5aca7d",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "def41c7e-7d54-4109-9458-7f968f13aaa6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "def41c7e-7d54-4109-9458-7f968f13aaa6",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' 
"subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:38.755 07:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:06:38.755 07:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:06:39.014 07:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:06:39.014 07:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 60598 00:06:39.014 07:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 60598 ']' 00:06:39.014 07:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 60598 00:06:39.014 07:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:06:39.014 07:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:39.014 07:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60598 00:06:39.014 07:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:39.014 07:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:39.014 killing process with pid 60598 00:06:39.014 07:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60598' 00:06:39.014 07:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 60598 00:06:39.014 07:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 60598 00:06:39.948 07:38:29 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:39.948 07:38:29 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:39.948 07:38:29 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:39.948 07:38:29 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.948 07:38:29 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:39.948 ************************************ 00:06:39.948 START TEST bdev_hello_world 00:06:39.948 ************************************ 00:06:39.948 07:38:29 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:40.206 [2024-11-29 07:38:29.942209] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:06:40.206 [2024-11-29 07:38:29.942298] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61213 ] 00:06:40.206 [2024-11-29 07:38:30.091470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.465 [2024-11-29 07:38:30.169190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.723 [2024-11-29 07:38:30.659474] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:40.723 [2024-11-29 07:38:30.659513] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:40.723 [2024-11-29 07:38:30.659529] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:40.723 [2024-11-29 07:38:30.661397] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:40.723 [2024-11-29 07:38:30.661905] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:40.723 [2024-11-29 07:38:30.661930] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:40.723 [2024-11-29 07:38:30.662129] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:06:40.723 00:06:40.723 [2024-11-29 07:38:30.662154] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:41.291 00:06:41.291 real 0m1.332s 00:06:41.291 user 0m1.075s 00:06:41.291 sys 0m0.153s 00:06:41.291 07:38:31 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.291 07:38:31 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:41.291 ************************************ 00:06:41.291 END TEST bdev_hello_world 00:06:41.291 ************************************ 00:06:41.550 07:38:31 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:06:41.550 07:38:31 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:41.550 07:38:31 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.550 07:38:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:41.550 ************************************ 00:06:41.550 START TEST bdev_bounds 00:06:41.550 ************************************ 00:06:41.550 07:38:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:41.550 Process bdevio pid: 61254 00:06:41.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
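Note: the bdevio run that follows drives every bdev through boundary conditions (reads/writes at offset + nbytes equal to and past the device size, overlapping offsets, compare-and-writev, NVMe passthru). Orchestration-wise, blockdev.sh starts bdevio with -w so it idles until tests are requested over RPC, then tests.py triggers the CUnit suites. Stripped to its essentials (a sketch assuming the default /var/tmp/spdk.sock socket; the real script also waits on the RPC socket via waitforlisten and cleans up via killprocess):

  test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
  bdevio_pid=$!
  # once the app is listening, run every registered CUnit suite over RPC
  test/bdev/bdevio/tests.py perform_tests
  kill "$bdevio_pid"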
00:06:41.550 07:38:31 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61254 00:06:41.550 07:38:31 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:41.550 07:38:31 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61254' 00:06:41.550 07:38:31 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61254 00:06:41.550 07:38:31 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:41.550 07:38:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61254 ']' 00:06:41.550 07:38:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.550 07:38:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.550 07:38:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.550 07:38:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.550 07:38:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:41.550 [2024-11-29 07:38:31.329249] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:06:41.550 [2024-11-29 07:38:31.329362] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61254 ] 00:06:41.550 [2024-11-29 07:38:31.481550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:41.810 [2024-11-29 07:38:31.581301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.810 [2024-11-29 07:38:31.581888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.810 [2024-11-29 07:38:31.581905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.382 07:38:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.382 07:38:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:42.382 07:38:32 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:42.382 I/O targets: 00:06:42.382 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:42.382 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:06:42.382 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:06:42.382 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:42.382 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:42.382 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:42.382 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:42.382 00:06:42.382 00:06:42.382 CUnit - A unit testing framework for C - Version 2.1-3 00:06:42.382 http://cunit.sourceforge.net/ 00:06:42.382 00:06:42.382 00:06:42.382 Suite: bdevio tests on: Nvme3n1 00:06:42.382 Test: blockdev write read block ...passed 00:06:42.382 Test: blockdev write zeroes read block ...passed 00:06:42.382 Test: blockdev write zeroes read no split ...passed 00:06:42.382 Test: blockdev write zeroes read split ...passed 00:06:42.382 Test: blockdev write zeroes 
read split partial ...passed 00:06:42.382 Test: blockdev reset ...[2024-11-29 07:38:32.302787] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:42.382 [2024-11-29 07:38:32.306855] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 00:06:42.382 passed 00:06:42.382 Test: blockdev write read 8 blocks ...passed 00:06:42.382 Test: blockdev write read size > 128k ...passed 00:06:42.382 Test: blockdev write read invalid size ...passed 00:06:42.382 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:42.382 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:42.382 Test: blockdev write read max offset ...passed 00:06:42.382 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:42.382 Test: blockdev writev readv 8 blocks ...passed 00:06:42.382 Test: blockdev writev readv 30 x 1block ...passed 00:06:42.382 Test: blockdev writev readv block ...passed 00:06:42.382 Test: blockdev writev readv size > 128k ...passed 00:06:42.382 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:42.382 Test: blockdev comparev and writev ...[2024-11-29 07:38:32.320937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bb604000 len:0x1000 00:06:42.382 [2024-11-29 07:38:32.321060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:42.382 passed 00:06:42.382 Test: blockdev nvme passthru rw ...passed 00:06:42.382 Test: blockdev nvme passthru vendor specific ...[2024-11-29 07:38:32.322529] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:42.382 [2024-11-29 07:38:32.322607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0passed 00:06:42.382 Test: blockdev nvme admin passthru ... sqhd:001c p:1 m:0 dnr:1 00:06:42.382 passed 00:06:42.382 Test: blockdev copy ...passed 00:06:42.382 Suite: bdevio tests on: Nvme2n3 00:06:42.382 Test: blockdev write read block ...passed 00:06:42.644 Test: blockdev write zeroes read block ...passed 00:06:42.644 Test: blockdev write zeroes read no split ...passed 00:06:42.644 Test: blockdev write zeroes read split ...passed 00:06:42.644 Test: blockdev write zeroes read split partial ...passed 00:06:42.644 Test: blockdev reset ...[2024-11-29 07:38:32.370940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:42.644 [2024-11-29 07:38:32.377231] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:42.644 passed 00:06:42.644 Test: blockdev write read 8 blocks ...passed 00:06:42.644 Test: blockdev write read size > 128k ...passed 00:06:42.644 Test: blockdev write read invalid size ...passed 00:06:42.644 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:42.644 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:42.644 Test: blockdev write read max offset ...passed 00:06:42.644 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:42.644 Test: blockdev writev readv 8 blocks ...passed 00:06:42.644 Test: blockdev writev readv 30 x 1block ...passed 00:06:42.644 Test: blockdev writev readv block ...passed 00:06:42.644 Test: blockdev writev readv size > 128k ...passed 00:06:42.644 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:42.644 Test: blockdev comparev and writev ...[2024-11-29 07:38:32.392159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bb602000 len:0x1000 00:06:42.644 [2024-11-29 07:38:32.392252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:42.644 passed 00:06:42.644 Test: blockdev nvme passthru rw ...passed 00:06:42.644 Test: blockdev nvme passthru vendor specific ...[2024-11-29 07:38:32.393500] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:42.644 passed 00:06:42.644 Test: blockdev nvme admin passthru ...[2024-11-29 07:38:32.393562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:42.644 passed 00:06:42.644 Test: blockdev copy ...passed 00:06:42.644 Suite: bdevio tests on: Nvme2n2 00:06:42.644 Test: blockdev write read block ...passed 00:06:42.644 Test: blockdev write zeroes read block ...passed 00:06:42.644 Test: blockdev write zeroes read no split ...passed 00:06:42.644 Test: blockdev write zeroes read split ...passed 00:06:42.644 Test: blockdev write zeroes read split partial ...passed 00:06:42.644 Test: blockdev reset ...[2024-11-29 07:38:32.450630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:42.644 [2024-11-29 07:38:32.454383] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:42.644 passed 00:06:42.644 Test: blockdev write read 8 blocks ...passed 00:06:42.644 Test: blockdev write read size > 128k ...passed 00:06:42.644 Test: blockdev write read invalid size ...passed 00:06:42.644 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:42.644 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:42.644 Test: blockdev write read max offset ...passed 00:06:42.644 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:42.644 Test: blockdev writev readv 8 blocks ...passed 00:06:42.644 Test: blockdev writev readv 30 x 1block ...passed 00:06:42.644 Test: blockdev writev readv block ...passed 00:06:42.644 Test: blockdev writev readv size > 128k ...passed 00:06:42.644 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:42.644 Test: blockdev comparev and writev ...[2024-11-29 07:38:32.467589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e2638000 len:0x1000 00:06:42.644 [2024-11-29 07:38:32.467634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:42.644 passed 00:06:42.644 Test: blockdev nvme passthru rw ...passed 00:06:42.644 Test: blockdev nvme passthru vendor specific ...[2024-11-29 07:38:32.469086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:42.644 [2024-11-29 07:38:32.469117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:42.644 passed 00:06:42.644 Test: blockdev nvme admin passthru ...passed 00:06:42.644 Test: blockdev copy ...passed 00:06:42.644 Suite: bdevio tests on: Nvme2n1 00:06:42.644 Test: blockdev write read block ...passed 00:06:42.644 Test: blockdev write zeroes read block ...passed 00:06:42.644 Test: blockdev write zeroes read no split ...passed 00:06:42.644 Test: blockdev write zeroes read split ...passed 00:06:42.644 Test: blockdev write zeroes read split partial ...passed 00:06:42.644 Test: blockdev reset ...[2024-11-29 07:38:32.515294] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:42.644 [2024-11-29 07:38:32.518371] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:42.644 passed 00:06:42.644 Test: blockdev write read 8 blocks ...passed 00:06:42.644 Test: blockdev write read size > 128k ...passed 00:06:42.644 Test: blockdev write read invalid size ...passed 00:06:42.644 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:42.644 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:42.644 Test: blockdev write read max offset ...passed 00:06:42.644 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:42.644 Test: blockdev writev readv 8 blocks ...passed 00:06:42.645 Test: blockdev writev readv 30 x 1block ...passed 00:06:42.645 Test: blockdev writev readv block ...passed 00:06:42.645 Test: blockdev writev readv size > 128k ...passed 00:06:42.645 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:42.645 Test: blockdev comparev and writev ...[2024-11-29 07:38:32.530064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e2634000 len:0x1000 00:06:42.645 [2024-11-29 07:38:32.530106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:42.645 passed 00:06:42.645 Test: blockdev nvme passthru rw ...passed 00:06:42.645 Test: blockdev nvme passthru vendor specific ...[2024-11-29 07:38:32.531414] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:42.645 [2024-11-29 07:38:32.531441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:42.645 passed 00:06:42.645 Test: blockdev nvme admin passthru ...passed 00:06:42.645 Test: blockdev copy ...passed 00:06:42.645 Suite: bdevio tests on: Nvme1n1p2 00:06:42.645 Test: blockdev write read block ...passed 00:06:42.645 Test: blockdev write zeroes read block ...passed 00:06:42.645 Test: blockdev write zeroes read no split ...passed 00:06:42.645 Test: blockdev write zeroes read split ...passed 00:06:42.645 Test: blockdev write zeroes read split partial ...passed 00:06:42.645 Test: blockdev reset ...[2024-11-29 07:38:32.583899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:42.645 [2024-11-29 07:38:32.586793] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:42.906 passed 00:06:42.906 Test: blockdev write read 8 blocks ...passed 00:06:42.906 Test: blockdev write read size > 128k ...passed 00:06:42.906 Test: blockdev write read invalid size ...passed 00:06:42.906 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:42.906 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:42.906 Test: blockdev write read max offset ...passed 00:06:42.906 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:42.906 Test: blockdev writev readv 8 blocks ...passed 00:06:42.906 Test: blockdev writev readv 30 x 1block ...passed 00:06:42.906 Test: blockdev writev readv block ...passed 00:06:42.906 Test: blockdev writev readv size > 128k ...passed 00:06:42.906 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:42.906 Test: blockdev comparev and writev ...[2024-11-29 07:38:32.593580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2e2630000 len:0x1000 00:06:42.906 [2024-11-29 07:38:32.593625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:42.906 passed 00:06:42.906 Test: blockdev nvme passthru rw ...passed 00:06:42.906 Test: blockdev nvme passthru vendor specific ...passed 00:06:42.906 Test: blockdev nvme admin passthru ...passed 00:06:42.906 Test: blockdev copy ...passed 00:06:42.906 Suite: bdevio tests on: Nvme1n1p1 00:06:42.906 Test: blockdev write read block ...passed 00:06:42.906 Test: blockdev write zeroes read block ...passed 00:06:42.906 Test: blockdev write zeroes read no split ...passed 00:06:42.906 Test: blockdev write zeroes read split ...passed 00:06:42.906 Test: blockdev write zeroes read split partial ...passed 00:06:42.906 Test: blockdev reset ...[2024-11-29 07:38:32.636489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:42.906 [2024-11-29 07:38:32.639850] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:42.906 passed 00:06:42.906 Test: blockdev write read 8 blocks ...passed 00:06:42.906 Test: blockdev write read size > 128k ...passed 00:06:42.906 Test: blockdev write read invalid size ...passed 00:06:42.906 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:42.906 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:42.906 Test: blockdev write read max offset ...passed 00:06:42.906 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:42.906 Test: blockdev writev readv 8 blocks ...passed 00:06:42.906 Test: blockdev writev readv 30 x 1block ...passed 00:06:42.906 Test: blockdev writev readv block ...passed 00:06:42.906 Test: blockdev writev readv size > 128k ...passed 00:06:42.906 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:42.906 Test: blockdev comparev and writev ...[2024-11-29 07:38:32.646615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2bb80e000 len:0x1000 00:06:42.906 [2024-11-29 07:38:32.646653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:42.906 passed 00:06:42.906 Test: blockdev nvme passthru rw ...passed 00:06:42.906 Test: blockdev nvme passthru vendor specific ...passed 00:06:42.906 Test: blockdev nvme admin passthru ...passed 00:06:42.906 Test: blockdev copy ...passed 00:06:42.906 Suite: bdevio tests on: Nvme0n1 00:06:42.906 Test: blockdev write read block ...passed 00:06:42.906 Test: blockdev write zeroes read block ...passed 00:06:42.906 Test: blockdev write zeroes read no split ...passed 00:06:42.906 Test: blockdev write zeroes read split ...passed 00:06:42.906 Test: blockdev write zeroes read split partial ...passed 00:06:42.907 Test: blockdev reset ...[2024-11-29 07:38:32.695051] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:42.907 [2024-11-29 07:38:32.697971] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:06:42.907 passed 00:06:42.907 Test: blockdev write read 8 blocks ...passed 00:06:42.907 Test: blockdev write read size > 128k ...passed 00:06:42.907 Test: blockdev write read invalid size ...passed 00:06:42.907 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:42.907 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:42.907 Test: blockdev write read max offset ...passed 00:06:42.907 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:42.907 Test: blockdev writev readv 8 blocks ...passed 00:06:42.907 Test: blockdev writev readv 30 x 1block ...passed 00:06:42.907 Test: blockdev writev readv block ...passed 00:06:42.907 Test: blockdev writev readv size > 128k ...passed 00:06:42.907 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:42.907 Test: blockdev comparev and writev ...passed 00:06:42.907 Test: blockdev nvme passthru rw ...[2024-11-29 07:38:32.703799] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:42.907 separate metadata which is not supported yet. 
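The comparev_and_writev case is skipped on Nvme0n1 because that namespace carries separate (non-interleaved) metadata, which bdevio does not exercise yet. A hedged way to see which bdevs on the same target expose metadata; the md_size and md_interleave field names in the bdev_get_bdevs output are assumptions.

# hedged sketch (field names assumed): list each bdev with its metadata size and layout
# to see why Nvme0n1 falls into the "separate metadata" skip path.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
  | jq -r '.[] | "\(.name)\tmd_size=\(.md_size // 0)\tinterleave=\(.md_interleave // false)"'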
00:06:42.907 passed 00:06:42.907 Test: blockdev nvme passthru vendor specific ...[2024-11-29 07:38:32.704282] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:42.907 passed 00:06:42.907 Test: blockdev nvme admin passthru ...[2024-11-29 07:38:32.704320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:42.907 passed 00:06:42.907 Test: blockdev copy ...passed 00:06:42.907 00:06:42.907 Run Summary: Type Total Ran Passed Failed Inactive 00:06:42.907 suites 7 7 n/a 0 0 00:06:42.907 tests 161 161 161 0 0 00:06:42.907 asserts 1025 1025 1025 0 n/a 00:06:42.907 00:06:42.907 Elapsed time = 1.176 seconds 00:06:42.907 0 00:06:42.907 07:38:32 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61254 00:06:42.907 07:38:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61254 ']' 00:06:42.907 07:38:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61254 00:06:42.907 07:38:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:42.907 07:38:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:42.907 07:38:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61254 00:06:42.907 07:38:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:42.907 07:38:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:42.907 killing process with pid 61254 00:06:42.907 07:38:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61254' 00:06:42.907 07:38:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61254 00:06:42.907 07:38:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61254 00:06:43.481 07:38:33 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:43.481 00:06:43.481 real 0m2.151s 00:06:43.481 user 0m5.503s 00:06:43.481 sys 0m0.265s 00:06:43.481 07:38:33 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.481 ************************************ 00:06:43.481 END TEST bdev_bounds 00:06:43.481 ************************************ 00:06:43.481 07:38:33 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:43.742 07:38:33 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:43.742 07:38:33 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:43.742 07:38:33 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.742 07:38:33 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:43.742 ************************************ 00:06:43.742 START TEST bdev_nbd 00:06:43.742 ************************************ 00:06:43.742 07:38:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:43.742 07:38:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:43.742 07:38:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:06:43.742 07:38:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.742 07:38:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:43.742 07:38:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:43.742 07:38:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:43.742 07:38:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:06:43.742 07:38:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:43.742 07:38:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:43.743 07:38:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:43.743 07:38:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:06:43.743 07:38:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:43.743 07:38:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:43.743 07:38:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:43.743 07:38:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:43.743 07:38:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61312 00:06:43.743 07:38:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:43.743 07:38:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61312 /var/tmp/spdk-nbd.sock 00:06:43.743 07:38:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61312 ']' 00:06:43.743 07:38:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:43.743 07:38:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:43.743 07:38:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:43.743 07:38:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:43.743 07:38:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.743 07:38:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:43.743 [2024-11-29 07:38:33.527416] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
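The NBD test does not reuse the main autotest target; it spawns its own bdev_svc on a private RPC socket (/var/tmp/spdk-nbd.sock) and only proceeds once that socket answers. A condensed sketch of the launch-and-wait pattern traced above; using spdk_get_version as the readiness probe is an assumption (the waitforlisten helper in the trace polls the socket itself).

# condensed sketch of the traced launch; spdk_get_version as a liveness probe is an assumption
SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/test/app/bdev_svc/bdev_svc" -r /var/tmp/spdk-nbd.sock -i 0 \
  --json "$SPDK/test/bdev/bdev.json" &
nbd_pid=$!
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock spdk_get_version >/dev/null 2>&1; do
  sleep 0.2
done
echo "bdev_svc (pid $nbd_pid) is listening on /var/tmp/spdk-nbd.sock"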
00:06:43.743 [2024-11-29 07:38:33.527542] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:44.003 [2024-11-29 07:38:33.685441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.003 [2024-11-29 07:38:33.781354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.575 07:38:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.575 07:38:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:44.575 07:38:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:44.575 07:38:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.575 07:38:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:44.575 07:38:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:44.575 07:38:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:44.575 07:38:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.575 07:38:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:44.575 07:38:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:44.575 07:38:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:44.575 07:38:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:44.575 07:38:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:44.575 07:38:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:44.575 07:38:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:44.834 07:38:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:44.834 07:38:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:44.835 07:38:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:44.835 07:38:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:44.835 07:38:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:44.835 07:38:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:44.835 07:38:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:44.835 07:38:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:44.835 07:38:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:44.835 07:38:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:44.835 07:38:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:44.835 07:38:34 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:44.835 1+0 records in 00:06:44.835 1+0 records out 00:06:44.835 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106254 s, 3.9 MB/s 00:06:44.835 07:38:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:44.835 07:38:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:44.835 07:38:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:44.835 07:38:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:44.835 07:38:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:44.835 07:38:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:44.835 07:38:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:44.835 07:38:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:06:45.095 07:38:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:45.095 07:38:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:45.095 07:38:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:45.095 07:38:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:45.095 07:38:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:45.095 07:38:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:45.095 07:38:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:45.095 07:38:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:45.095 07:38:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:45.095 07:38:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:45.095 07:38:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:45.095 07:38:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:45.095 1+0 records in 00:06:45.095 1+0 records out 00:06:45.095 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00154874 s, 2.6 MB/s 00:06:45.095 07:38:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:45.095 07:38:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:45.095 07:38:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:45.095 07:38:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:45.095 07:38:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:45.095 07:38:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:45.095 07:38:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:45.095 07:38:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:06:45.095 07:38:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:45.095 07:38:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:45.095 07:38:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:45.095 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:45.095 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:45.095 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:45.095 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:45.095 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:45.095 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:45.095 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:45.095 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:45.095 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:45.095 1+0 records in 00:06:45.095 1+0 records out 00:06:45.095 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000580336 s, 7.1 MB/s 00:06:45.095 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:45.095 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:45.095 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:45.095 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:45.095 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:45.095 07:38:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:45.095 07:38:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:45.095 07:38:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:45.355 07:38:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:45.355 07:38:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:45.355 07:38:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:45.355 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:45.355 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:45.355 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:45.355 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:45.355 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:45.355 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:45.355 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:45.355 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:45.355 07:38:35 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:45.355 1+0 records in 00:06:45.355 1+0 records out 00:06:45.355 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000366535 s, 11.2 MB/s 00:06:45.355 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:45.355 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:45.355 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:45.355 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:45.355 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:45.355 07:38:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:45.355 07:38:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:45.355 07:38:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:45.615 07:38:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:45.615 07:38:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:45.615 07:38:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:45.615 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:45.615 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:45.615 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:45.615 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:45.615 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:45.615 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:45.615 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:45.615 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:45.615 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:45.615 1+0 records in 00:06:45.615 1+0 records out 00:06:45.615 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000453891 s, 9.0 MB/s 00:06:45.615 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:45.615 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:45.615 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:45.615 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:45.615 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:45.615 07:38:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:45.615 07:38:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:45.615 07:38:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:06:45.875 07:38:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:45.875 07:38:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:45.875 07:38:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:45.875 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:45.876 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:45.876 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:45.876 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:45.876 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:45.876 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:45.876 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:45.876 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:45.876 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:45.876 1+0 records in 00:06:45.876 1+0 records out 00:06:45.876 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00107482 s, 3.8 MB/s 00:06:45.876 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:45.876 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:45.876 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:45.876 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:45.876 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:45.876 07:38:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:45.876 07:38:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:45.876 07:38:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:46.138 07:38:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:06:46.138 07:38:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:06:46.138 07:38:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:06:46.138 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:06:46.138 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:46.138 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:46.138 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:46.138 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:06:46.138 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:46.138 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:46.138 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:46.138 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 
-- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:46.138 1+0 records in 00:06:46.138 1+0 records out 00:06:46.138 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000369294 s, 11.1 MB/s 00:06:46.138 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:46.138 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:46.138 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:46.138 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:46.138 07:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:46.138 07:38:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:46.138 07:38:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:46.138 07:38:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:46.138 07:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:46.138 { 00:06:46.138 "nbd_device": "/dev/nbd0", 00:06:46.138 "bdev_name": "Nvme0n1" 00:06:46.138 }, 00:06:46.138 { 00:06:46.138 "nbd_device": "/dev/nbd1", 00:06:46.138 "bdev_name": "Nvme1n1p1" 00:06:46.138 }, 00:06:46.138 { 00:06:46.138 "nbd_device": "/dev/nbd2", 00:06:46.138 "bdev_name": "Nvme1n1p2" 00:06:46.138 }, 00:06:46.138 { 00:06:46.138 "nbd_device": "/dev/nbd3", 00:06:46.138 "bdev_name": "Nvme2n1" 00:06:46.138 }, 00:06:46.138 { 00:06:46.138 "nbd_device": "/dev/nbd4", 00:06:46.138 "bdev_name": "Nvme2n2" 00:06:46.138 }, 00:06:46.138 { 00:06:46.138 "nbd_device": "/dev/nbd5", 00:06:46.138 "bdev_name": "Nvme2n3" 00:06:46.138 }, 00:06:46.138 { 00:06:46.138 "nbd_device": "/dev/nbd6", 00:06:46.138 "bdev_name": "Nvme3n1" 00:06:46.138 } 00:06:46.138 ]' 00:06:46.138 07:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:46.138 07:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:46.138 { 00:06:46.138 "nbd_device": "/dev/nbd0", 00:06:46.138 "bdev_name": "Nvme0n1" 00:06:46.138 }, 00:06:46.138 { 00:06:46.138 "nbd_device": "/dev/nbd1", 00:06:46.138 "bdev_name": "Nvme1n1p1" 00:06:46.138 }, 00:06:46.138 { 00:06:46.138 "nbd_device": "/dev/nbd2", 00:06:46.138 "bdev_name": "Nvme1n1p2" 00:06:46.138 }, 00:06:46.138 { 00:06:46.138 "nbd_device": "/dev/nbd3", 00:06:46.138 "bdev_name": "Nvme2n1" 00:06:46.138 }, 00:06:46.138 { 00:06:46.138 "nbd_device": "/dev/nbd4", 00:06:46.138 "bdev_name": "Nvme2n2" 00:06:46.138 }, 00:06:46.138 { 00:06:46.138 "nbd_device": "/dev/nbd5", 00:06:46.138 "bdev_name": "Nvme2n3" 00:06:46.138 }, 00:06:46.138 { 00:06:46.138 "nbd_device": "/dev/nbd6", 00:06:46.138 "bdev_name": "Nvme3n1" 00:06:46.138 } 00:06:46.138 ]' 00:06:46.138 07:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:46.138 07:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:06:46.138 07:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.138 07:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:06:46.138 07:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:46.138 07:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:46.138 07:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:46.138 07:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:46.399 07:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:46.399 07:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:46.399 07:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:46.399 07:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:46.399 07:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:46.399 07:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:46.399 07:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:46.399 07:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:46.399 07:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:46.399 07:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:46.660 07:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:46.660 07:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:46.660 07:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:46.660 07:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:46.660 07:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:46.660 07:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:46.660 07:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:46.660 07:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:46.660 07:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:46.660 07:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:46.920 07:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:46.920 07:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:46.920 07:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:46.920 07:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:46.920 07:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:46.920 07:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:46.920 07:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:46.920 07:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:46.920 07:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:46.920 07:38:36 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:47.182 07:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:47.182 07:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:47.182 07:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:47.182 07:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.182 07:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.182 07:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:47.182 07:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:47.182 07:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.182 07:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.182 07:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:47.182 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:47.182 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:47.182 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:47.182 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.182 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.182 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:47.182 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:47.182 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.182 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.182 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:47.445 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:47.445 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:47.445 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:47.445 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.445 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.445 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:47.445 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:47.445 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.445 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.445 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:06:47.704 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:06:47.704 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:06:47.704 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
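Teardown is the mirror image of the start sequence: nbd_stop_disk over the same socket, then poll /proc/partitions until the kernel node disappears, which is what the traced waitfornbd_exit helper does. A condensed sketch of that step:

# condensed sketch of the traced nbd_stop_disk + waitfornbd_exit sequence
stop_nbd_and_wait() {
  local dev=$1 name i
  name=$(basename "$dev")
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk "$dev"
  for ((i = 1; i <= 20; i++)); do
    grep -q -w "$name" /proc/partitions || return 0   # gone from the kernel: done
    sleep 0.1
  done
  return 1   # still present after the retries
}
stop_nbd_and_wait /dev/nbd6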
00:06:47.704 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.704 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.704 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:06:47.704 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:47.704 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.704 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:47.704 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.704 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:47.962 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:47.962 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:47.962 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:47.962 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:47.962 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:47.962 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:47.962 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:47.962 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:47.962 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:47.962 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:47.962 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:47.962 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:47.962 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:47.962 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.962 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:47.962 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:47.962 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:47.962 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:47.962 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:47.962 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.962 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:47.962 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:47.962 07:38:37 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:47.962 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:47.962 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:47.962 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:47.962 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:47.962 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:48.221 /dev/nbd0 00:06:48.221 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:48.221 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:48.221 07:38:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:48.221 07:38:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:48.221 07:38:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:48.221 07:38:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:48.221 07:38:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:48.221 07:38:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:48.221 07:38:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:48.221 07:38:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:48.221 07:38:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:48.221 1+0 records in 00:06:48.221 1+0 records out 00:06:48.221 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000402692 s, 10.2 MB/s 00:06:48.221 07:38:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.221 07:38:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:48.221 07:38:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.221 07:38:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:48.221 07:38:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:48.221 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:48.221 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:48.221 07:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:06:48.221 /dev/nbd1 00:06:48.221 07:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:48.221 07:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:48.221 07:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:48.221 07:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:48.221 07:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:48.221 07:38:38 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:48.221 07:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:48.221 07:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:48.221 07:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:48.221 07:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:48.221 07:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:48.221 1+0 records in 00:06:48.221 1+0 records out 00:06:48.221 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000425059 s, 9.6 MB/s 00:06:48.221 07:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.221 07:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:48.221 07:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.479 07:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:48.479 07:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:48.479 07:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:48.479 07:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:48.479 07:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:06:48.479 /dev/nbd10 00:06:48.479 07:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:48.479 07:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:48.479 07:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:48.479 07:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:48.479 07:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:48.479 07:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:48.479 07:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:48.480 07:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:48.480 07:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:48.480 07:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:48.480 07:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:48.480 1+0 records in 00:06:48.480 1+0 records out 00:06:48.480 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000435996 s, 9.4 MB/s 00:06:48.480 07:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.480 07:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:48.480 07:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.480 07:38:38 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:48.480 07:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:48.480 07:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:48.480 07:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:48.480 07:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:06:48.738 /dev/nbd11 00:06:48.738 07:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:48.738 07:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:48.738 07:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:48.738 07:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:48.738 07:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:48.738 07:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:48.738 07:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:48.738 07:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:48.738 07:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:48.738 07:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:48.738 07:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:48.738 1+0 records in 00:06:48.738 1+0 records out 00:06:48.738 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000513914 s, 8.0 MB/s 00:06:48.738 07:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.738 07:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:48.738 07:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.738 07:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:48.738 07:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:48.738 07:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:48.738 07:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:48.738 07:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:06:48.997 /dev/nbd12 00:06:48.997 07:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:48.997 07:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:48.997 07:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:48.997 07:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:48.997 07:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:48.997 07:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:48.997 07:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
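In this second pass (nbd_rpc_data_verify) all seven bdevs are exported again, now onto the explicit /dev/nbd0 through /dev/nbd14 list, and each export counts as up once its name shows in /proc/partitions and a single 4 KiB direct-I/O read goes through. A condensed sketch of that start-and-probe step; the traced helper copies the read into a temp file and checks its size, while this sketch simply discards it.

# condensed sketch of the traced nbd_start_disk + waitfornbd + dd probe
start_nbd_and_probe() {
  local bdev=$1 dev=$2 name i
  name=$(basename "$dev")
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk "$bdev" "$dev"
  for ((i = 1; i <= 20; i++)); do
    grep -q -w "$name" /proc/partitions && break
    sleep 0.1
  done
  # one direct-I/O read to confirm the export answers (discarded here; written to a
  # temp file and size-checked in the traced helper)
  dd if="$dev" of=/dev/null bs=4096 count=1 iflag=direct
}
start_nbd_and_probe Nvme2n2 /dev/nbd12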
00:06:48.997 07:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:48.997 07:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:48.997 07:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:48.997 07:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:48.997 1+0 records in 00:06:48.997 1+0 records out 00:06:48.997 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00043515 s, 9.4 MB/s 00:06:48.997 07:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.997 07:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:48.997 07:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.997 07:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:48.997 07:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:48.997 07:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:48.997 07:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:48.997 07:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:06:49.256 /dev/nbd13 00:06:49.256 07:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:49.256 07:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:49.256 07:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:49.256 07:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:49.256 07:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:49.256 07:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:49.256 07:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:49.256 07:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:49.256 07:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:49.256 07:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:49.256 07:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:49.256 1+0 records in 00:06:49.256 1+0 records out 00:06:49.256 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000318277 s, 12.9 MB/s 00:06:49.256 07:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:49.256 07:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:49.256 07:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:49.256 07:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:49.256 07:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:49.256 07:38:39 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:49.256 07:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:49.256 07:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:06:49.514 /dev/nbd14 00:06:49.514 07:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:06:49.514 07:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:06:49.514 07:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:06:49.514 07:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:49.514 07:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:49.514 07:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:49.514 07:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:06:49.514 07:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:49.514 07:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:49.514 07:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:49.514 07:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:49.514 1+0 records in 00:06:49.514 1+0 records out 00:06:49.514 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000567527 s, 7.2 MB/s 00:06:49.514 07:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:49.514 07:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:49.514 07:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:49.514 07:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:49.514 07:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:49.514 07:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:49.514 07:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:49.514 07:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:49.514 07:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.514 07:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:49.773 07:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:49.773 { 00:06:49.773 "nbd_device": "/dev/nbd0", 00:06:49.773 "bdev_name": "Nvme0n1" 00:06:49.773 }, 00:06:49.773 { 00:06:49.773 "nbd_device": "/dev/nbd1", 00:06:49.773 "bdev_name": "Nvme1n1p1" 00:06:49.773 }, 00:06:49.773 { 00:06:49.773 "nbd_device": "/dev/nbd10", 00:06:49.773 "bdev_name": "Nvme1n1p2" 00:06:49.773 }, 00:06:49.773 { 00:06:49.773 "nbd_device": "/dev/nbd11", 00:06:49.773 "bdev_name": "Nvme2n1" 00:06:49.773 }, 00:06:49.773 { 00:06:49.773 "nbd_device": "/dev/nbd12", 00:06:49.773 "bdev_name": "Nvme2n2" 00:06:49.773 }, 00:06:49.773 { 00:06:49.773 "nbd_device": "/dev/nbd13", 00:06:49.773 "bdev_name": "Nvme2n3" 
00:06:49.773 }, 00:06:49.773 { 00:06:49.773 "nbd_device": "/dev/nbd14", 00:06:49.773 "bdev_name": "Nvme3n1" 00:06:49.773 } 00:06:49.773 ]' 00:06:49.773 07:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:49.773 07:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:49.773 { 00:06:49.773 "nbd_device": "/dev/nbd0", 00:06:49.773 "bdev_name": "Nvme0n1" 00:06:49.773 }, 00:06:49.773 { 00:06:49.773 "nbd_device": "/dev/nbd1", 00:06:49.773 "bdev_name": "Nvme1n1p1" 00:06:49.773 }, 00:06:49.773 { 00:06:49.773 "nbd_device": "/dev/nbd10", 00:06:49.773 "bdev_name": "Nvme1n1p2" 00:06:49.773 }, 00:06:49.773 { 00:06:49.773 "nbd_device": "/dev/nbd11", 00:06:49.773 "bdev_name": "Nvme2n1" 00:06:49.773 }, 00:06:49.773 { 00:06:49.773 "nbd_device": "/dev/nbd12", 00:06:49.773 "bdev_name": "Nvme2n2" 00:06:49.773 }, 00:06:49.773 { 00:06:49.773 "nbd_device": "/dev/nbd13", 00:06:49.773 "bdev_name": "Nvme2n3" 00:06:49.773 }, 00:06:49.773 { 00:06:49.773 "nbd_device": "/dev/nbd14", 00:06:49.773 "bdev_name": "Nvme3n1" 00:06:49.773 } 00:06:49.773 ]' 00:06:49.773 07:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:49.773 /dev/nbd1 00:06:49.773 /dev/nbd10 00:06:49.773 /dev/nbd11 00:06:49.773 /dev/nbd12 00:06:49.773 /dev/nbd13 00:06:49.773 /dev/nbd14' 00:06:49.773 07:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:49.773 07:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:49.773 /dev/nbd1 00:06:49.773 /dev/nbd10 00:06:49.773 /dev/nbd11 00:06:49.773 /dev/nbd12 00:06:49.773 /dev/nbd13 00:06:49.773 /dev/nbd14' 00:06:49.773 07:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:06:49.773 07:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:06:49.773 07:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:06:49.773 07:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:06:49.773 07:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:06:49.773 07:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:49.773 07:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:49.773 07:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:49.773 07:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:49.773 07:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:49.773 07:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:49.773 256+0 records in 00:06:49.773 256+0 records out 00:06:49.773 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00721617 s, 145 MB/s 00:06:49.773 07:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:49.773 07:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:49.773 256+0 records in 00:06:49.773 256+0 records out 00:06:49.773 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0667832 s, 15.7 MB/s 00:06:49.773 07:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:49.773 07:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:49.773 256+0 records in 00:06:49.773 256+0 records out 00:06:49.773 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0732401 s, 14.3 MB/s 00:06:49.773 07:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:49.773 07:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:50.031 256+0 records in 00:06:50.031 256+0 records out 00:06:50.031 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.066249 s, 15.8 MB/s 00:06:50.031 07:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:50.031 07:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:50.031 256+0 records in 00:06:50.031 256+0 records out 00:06:50.031 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0638318 s, 16.4 MB/s 00:06:50.031 07:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:50.031 07:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:50.031 256+0 records in 00:06:50.031 256+0 records out 00:06:50.031 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.062679 s, 16.7 MB/s 00:06:50.031 07:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:50.031 07:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:50.031 256+0 records in 00:06:50.031 256+0 records out 00:06:50.031 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0647127 s, 16.2 MB/s 00:06:50.031 07:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:50.031 07:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:06:50.289 256+0 records in 00:06:50.289 256+0 records out 00:06:50.289 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0645653 s, 16.2 MB/s 00:06:50.289 07:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:06:50.289 07:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:50.289 07:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:50.289 07:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:50.289 07:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:50.289 07:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:50.289 07:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:50.289 07:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:06:50.289 07:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:50.289 07:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:50.289 07:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:50.289 07:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:50.289 07:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:50.289 07:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:50.289 07:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:50.289 07:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:50.289 07:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:50.289 07:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:50.289 07:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:50.289 07:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:50.289 07:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:06:50.289 07:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:50.289 07:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:50.289 07:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.289 07:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:50.289 07:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:50.289 07:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:50.289 07:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.289 07:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:50.289 07:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:50.289 07:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:50.290 07:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:50.290 07:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.290 07:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.290 07:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:50.548 07:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:50.548 07:38:40 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:06:50.548 07:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.548 07:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:50.548 07:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:50.548 07:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:50.548 07:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:50.548 07:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.548 07:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.548 07:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:50.548 07:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:50.548 07:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.548 07:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.548 07:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:50.806 07:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:50.806 07:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:50.806 07:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:50.806 07:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.806 07:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.806 07:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:50.806 07:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:50.806 07:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.806 07:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.806 07:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:51.064 07:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:51.064 07:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:51.064 07:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:51.064 07:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:51.064 07:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:51.064 07:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:51.064 07:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:51.064 07:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:51.064 07:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:51.064 07:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:51.322 07:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:06:51.322 07:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:51.322 07:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:51.322 07:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:51.322 07:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:51.322 07:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:51.322 07:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:51.322 07:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:51.322 07:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:51.322 07:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:51.580 07:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:51.580 07:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:51.580 07:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:51.580 07:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:51.580 07:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:51.580 07:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:51.580 07:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:51.580 07:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:51.580 07:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:51.580 07:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:06:51.581 07:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:06:51.581 07:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:06:51.581 07:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:06:51.581 07:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:51.581 07:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:51.581 07:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:06:51.581 07:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:51.581 07:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:51.581 07:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:51.581 07:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.581 07:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:51.839 07:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:51.839 07:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:51.839 07:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:51.839 07:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:06:51.839 07:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:51.839 07:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:51.839 07:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:51.839 07:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:51.839 07:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:51.839 07:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:51.839 07:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:51.839 07:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:51.839 07:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:51.839 07:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.839 07:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:51.839 07:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:52.098 malloc_lvol_verify 00:06:52.098 07:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:52.356 fbd14eaf-b492-4214-a5ef-6f77546b13ce 00:06:52.356 07:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:52.614 ebb7b196-bbe7-43a1-9884-e081af932ba5 00:06:52.614 07:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:52.614 /dev/nbd0 00:06:52.614 07:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:52.614 07:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:52.614 07:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:52.614 07:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:52.614 07:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:52.614 mke2fs 1.47.0 (5-Feb-2023) 00:06:52.614 Discarding device blocks: 0/4096 done 00:06:52.614 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:52.614 00:06:52.614 Allocating group tables: 0/1 done 00:06:52.614 Writing inode tables: 0/1 done 00:06:52.614 Creating journal (1024 blocks): done 00:06:52.614 Writing superblocks and filesystem accounting information: 0/1 done 00:06:52.614 00:06:52.614 07:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:52.614 07:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.615 07:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:52.615 07:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:52.615 07:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:52.615 07:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:06:52.615 07:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:52.873 07:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:52.873 07:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:52.873 07:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:52.873 07:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.873 07:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.873 07:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:52.873 07:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:52.873 07:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.873 07:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61312 00:06:52.873 07:38:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61312 ']' 00:06:52.873 07:38:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61312 00:06:52.873 07:38:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:06:52.873 07:38:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:52.873 07:38:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61312 00:06:52.873 07:38:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:52.873 07:38:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:52.873 killing process with pid 61312 00:06:52.873 07:38:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61312' 00:06:52.873 07:38:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61312 00:06:52.873 07:38:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61312 00:06:53.441 07:38:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:53.441 00:06:53.441 real 0m9.916s 00:06:53.441 user 0m14.265s 00:06:53.441 sys 0m3.281s 00:06:53.441 07:38:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.441 07:38:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:53.441 ************************************ 00:06:53.441 END TEST bdev_nbd 00:06:53.441 ************************************ 00:06:53.700 07:38:43 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:06:53.700 07:38:43 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:06:53.700 skipping fio tests on NVMe due to multi-ns failures. 00:06:53.700 07:38:43 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:06:53.700 07:38:43 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:06:53.700 07:38:43 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:53.700 07:38:43 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:53.700 07:38:43 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:53.700 07:38:43 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.700 07:38:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:53.700 ************************************ 00:06:53.700 START TEST bdev_verify 00:06:53.700 ************************************ 00:06:53.700 07:38:43 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:53.700 [2024-11-29 07:38:43.501270] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:06:53.700 [2024-11-29 07:38:43.501385] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61712 ] 00:06:53.958 [2024-11-29 07:38:43.657250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:53.958 [2024-11-29 07:38:43.734543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.958 [2024-11-29 07:38:43.734697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.554 Running I/O for 5 seconds... 
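Annotation: the bdev_verify stage launched here drives all seven bdevs through the bdevperf example app with a verifying workload. Restating the invocation from the run_test line above with per-flag comments (the flag meanings are my reading of bdevperf's usage text, so treat them as hedged, not authoritative):

    # -q 128    queue depth per job
    # -o 4096   I/O size in bytes (4 KiB)
    # -w verify write each block, read it back, and compare
    # -t 5      run time in seconds
    # -m 0x3    reactor core mask (cores 0 and 1)
    # -C        as I understand it, lets every core submit I/O to every
    #           bdev -- consistent with each device reporting one result
    #           row per core (Core Mask 0x1 and 0x2) in the table below
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3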
00:06:56.507 19200.00 IOPS, 75.00 MiB/s [2024-11-29T07:38:47.823Z] 19104.00 IOPS, 74.62 MiB/s [2024-11-29T07:38:48.765Z] 21440.00 IOPS, 83.75 MiB/s [2024-11-29T07:38:49.707Z] 22208.00 IOPS, 86.75 MiB/s [2024-11-29T07:38:49.707Z] 21376.00 IOPS, 83.50 MiB/s 00:06:59.763 Latency(us) 00:06:59.763 [2024-11-29T07:38:49.707Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:59.763 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:59.763 Verification LBA range: start 0x0 length 0xbd0bd 00:06:59.763 Nvme0n1 : 5.09 1459.60 5.70 0.00 0.00 87507.58 15526.99 77030.01 00:06:59.763 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:59.763 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:59.763 Nvme0n1 : 5.07 1564.08 6.11 0.00 0.00 81650.13 13308.85 82272.89 00:06:59.763 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:59.763 Verification LBA range: start 0x0 length 0x4ff80 00:06:59.763 Nvme1n1p1 : 5.09 1459.18 5.70 0.00 0.00 87415.38 13712.15 75013.51 00:06:59.763 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:59.763 Verification LBA range: start 0x4ff80 length 0x4ff80 00:06:59.763 Nvme1n1p1 : 5.08 1563.10 6.11 0.00 0.00 81434.77 15426.17 79853.10 00:06:59.763 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:59.763 Verification LBA range: start 0x0 length 0x4ff7f 00:06:59.763 Nvme1n1p2 : 5.09 1458.73 5.70 0.00 0.00 87363.48 14014.62 71787.13 00:06:59.763 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:59.763 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:06:59.763 Nvme1n1p2 : 5.08 1562.61 6.10 0.00 0.00 81303.29 16031.11 79046.50 00:06:59.763 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:59.763 Verification LBA range: start 0x0 length 0x80000 00:06:59.763 Nvme2n1 : 5.09 1457.86 5.69 0.00 0.00 87249.61 15426.17 72190.42 00:06:59.763 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:59.763 Verification LBA range: start 0x80000 length 0x80000 00:06:59.763 Nvme2n1 : 5.08 1562.17 6.10 0.00 0.00 81166.49 17341.83 77836.60 00:06:59.763 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:59.763 Verification LBA range: start 0x0 length 0x80000 00:06:59.763 Nvme2n2 : 5.09 1457.49 5.69 0.00 0.00 87123.24 15627.82 72190.42 00:06:59.763 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:59.763 Verification LBA range: start 0x80000 length 0x80000 00:06:59.763 Nvme2n2 : 5.08 1561.73 6.10 0.00 0.00 81036.51 16837.71 75013.51 00:06:59.763 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:59.763 Verification LBA range: start 0x0 length 0x80000 00:06:59.763 Nvme2n3 : 5.10 1457.10 5.69 0.00 0.00 86980.86 15930.29 76223.41 00:06:59.763 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:59.763 Verification LBA range: start 0x80000 length 0x80000 00:06:59.763 Nvme2n3 : 5.08 1561.33 6.10 0.00 0.00 80938.39 14922.04 78239.90 00:06:59.763 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:59.763 Verification LBA range: start 0x0 length 0x20000 00:06:59.763 Nvme3n1 : 5.10 1456.68 5.69 0.00 0.00 86846.61 16031.11 77433.30 00:06:59.763 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:59.763 Verification LBA range: start 0x20000 length 0x20000 00:06:59.763 
Nvme3n1 : 5.08 1560.88 6.10 0.00 0.00 80878.33 13510.50 81869.59 00:06:59.763 [2024-11-29T07:38:49.707Z] =================================================================================================================== 00:06:59.763 [2024-11-29T07:38:49.707Z] Total : 21142.55 82.59 0.00 0.00 84106.58 13308.85 82272.89 00:07:00.704 00:07:00.704 real 0m7.198s 00:07:00.704 user 0m13.519s 00:07:00.704 sys 0m0.205s 00:07:00.704 07:38:50 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.704 ************************************ 00:07:00.704 END TEST bdev_verify 00:07:00.704 ************************************ 00:07:00.704 07:38:50 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:00.967 07:38:50 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:00.967 07:38:50 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:00.967 07:38:50 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.967 07:38:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:00.967 ************************************ 00:07:00.967 START TEST bdev_verify_big_io 00:07:00.967 ************************************ 00:07:00.967 07:38:50 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:00.967 [2024-11-29 07:38:50.765002] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:07:00.967 [2024-11-29 07:38:50.765131] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61811 ] 00:07:01.227 [2024-11-29 07:38:50.925590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:01.227 [2024-11-29 07:38:51.047882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.227 [2024-11-29 07:38:51.048050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.168 Running I/O for 5 seconds... 
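Annotation: in these bdevperf tables MiB/s is simply IOPS times the I/O size, so the progress and summary lines can be sanity-checked by hand; for example, the final bdev_verify sample above reports 21376.00 IOPS at the 4096-byte I/O size, which is exactly 83.50 MiB/s:

    # IOPS x I/O size (bytes) / 2^20 = MiB/s
    awk 'BEGIN { printf "%.2f MiB/s\n", 21376 * 4096 / 1048576 }'   # prints 83.50 MiB/s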
00:07:07.362 356.00 IOPS, 22.25 MiB/s [2024-11-29T07:38:58.242Z] 2431.00 IOPS, 151.94 MiB/s [2024-11-29T07:38:58.242Z] 3283.33 IOPS, 205.21 MiB/s 00:07:08.298 Latency(us) 00:07:08.298 [2024-11-29T07:38:58.242Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:08.298 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:08.298 Verification LBA range: start 0x0 length 0xbd0b 00:07:08.298 Nvme0n1 : 5.88 97.94 6.12 0.00 0.00 1214921.91 19358.33 1387346.71 00:07:08.298 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:08.298 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:08.298 Nvme0n1 : 5.69 117.66 7.35 0.00 0.00 1046464.70 14720.39 1316366.18 00:07:08.298 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:08.298 Verification LBA range: start 0x0 length 0x4ff8 00:07:08.298 Nvme1n1p1 : 5.98 113.22 7.08 0.00 0.00 1036587.25 88725.66 1116330.14 00:07:08.298 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:08.298 Verification LBA range: start 0x4ff8 length 0x4ff8 00:07:08.298 Nvme1n1p1 : 5.83 120.47 7.53 0.00 0.00 986458.56 102437.81 1109877.37 00:07:08.298 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:08.298 Verification LBA range: start 0x0 length 0x4ff7 00:07:08.298 Nvme1n1p2 : 5.98 117.66 7.35 0.00 0.00 978606.58 96388.33 1109877.37 00:07:08.298 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:08.298 Verification LBA range: start 0x4ff7 length 0x4ff7 00:07:08.298 Nvme1n1p2 : 5.83 119.06 7.44 0.00 0.00 961680.87 102034.51 1006632.96 00:07:08.298 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:08.298 Verification LBA range: start 0x0 length 0x8000 00:07:08.298 Nvme2n1 : 6.08 122.74 7.67 0.00 0.00 916405.74 41136.44 1135688.47 00:07:08.298 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:08.298 Verification LBA range: start 0x8000 length 0x8000 00:07:08.298 Nvme2n1 : 5.99 119.61 7.48 0.00 0.00 934132.46 66947.54 1884210.41 00:07:08.298 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:08.298 Verification LBA range: start 0x0 length 0x8000 00:07:08.298 Nvme2n2 : 6.08 126.28 7.89 0.00 0.00 869831.81 52428.80 1161499.57 00:07:08.298 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:08.298 Verification LBA range: start 0x8000 length 0x8000 00:07:08.298 Nvme2n2 : 6.08 123.93 7.75 0.00 0.00 872006.79 52832.10 1910021.51 00:07:08.298 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:08.298 Verification LBA range: start 0x0 length 0x8000 00:07:08.298 Nvme2n3 : 6.08 126.24 7.89 0.00 0.00 840580.86 53638.70 1187310.67 00:07:08.298 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:08.298 Verification LBA range: start 0x8000 length 0x8000 00:07:08.298 Nvme2n3 : 6.13 133.36 8.33 0.00 0.00 786231.34 13308.85 1948738.17 00:07:08.298 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:08.298 Verification LBA range: start 0x0 length 0x2000 00:07:08.298 Nvme3n1 : 6.18 144.95 9.06 0.00 0.00 709961.09 894.82 1213121.77 00:07:08.298 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:08.298 Verification LBA range: start 0x2000 length 0x2000 00:07:08.298 Nvme3n1 : 6.18 157.85 9.87 0.00 0.00 645689.75 715.22 1974549.27 00:07:08.298 
[2024-11-29T07:38:58.242Z] =================================================================================================================== 00:07:08.298 [2024-11-29T07:38:58.242Z] Total : 1740.97 108.81 0.00 0.00 896769.60 715.22 1974549.27 00:07:09.676 00:07:09.676 real 0m8.801s 00:07:09.676 user 0m16.610s 00:07:09.676 sys 0m0.268s 00:07:09.676 07:38:59 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:09.676 ************************************ 00:07:09.676 END TEST bdev_verify_big_io 00:07:09.676 ************************************ 00:07:09.676 07:38:59 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:09.676 07:38:59 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:09.676 07:38:59 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:09.676 07:38:59 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.676 07:38:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:09.676 ************************************ 00:07:09.676 START TEST bdev_write_zeroes 00:07:09.676 ************************************ 00:07:09.676 07:38:59 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:09.937 [2024-11-29 07:38:59.632000] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:07:09.937 [2024-11-29 07:38:59.632115] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61920 ] 00:07:09.937 [2024-11-29 07:38:59.791987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.199 [2024-11-29 07:38:59.890385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.769 Running I/O for 1 seconds... 
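Annotation: the bdev_write_zeroes pass starting here reuses the same bdevperf harness with only the workload and duration changed: -w write_zeroes issues zero-fill requests instead of the write/read/compare cycle, and with no -m or -C flags a single reactor on core 0 drives every bdev (matching "Total cores available: 1" above and Core Mask 0x1 on every result row below). A hedged restatement of the invocation from the run_test line:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w write_zeroes -t 1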
00:07:11.704 64064.00 IOPS, 250.25 MiB/s 00:07:11.704 Latency(us) 00:07:11.704 [2024-11-29T07:39:01.648Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:11.704 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:11.704 Nvme0n1 : 1.03 9096.65 35.53 0.00 0.00 14040.23 7410.61 25004.50 00:07:11.704 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:11.704 Nvme1n1p1 : 1.03 9085.68 35.49 0.00 0.00 14035.50 11040.30 24802.86 00:07:11.704 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:11.704 Nvme1n1p2 : 1.03 9074.75 35.45 0.00 0.00 14014.78 10334.52 23592.96 00:07:11.704 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:11.704 Nvme2n1 : 1.03 9064.58 35.41 0.00 0.00 14006.50 10183.29 22887.19 00:07:11.704 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:11.704 Nvme2n2 : 1.03 9054.47 35.37 0.00 0.00 14003.26 10082.46 22383.06 00:07:11.704 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:11.704 Nvme2n3 : 1.03 9044.29 35.33 0.00 0.00 13990.94 9175.04 23492.14 00:07:11.704 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:11.704 Nvme3n1 : 1.03 9034.25 35.29 0.00 0.00 13977.07 8519.68 25206.15 00:07:11.704 [2024-11-29T07:39:01.648Z] =================================================================================================================== 00:07:11.704 [2024-11-29T07:39:01.648Z] Total : 63454.68 247.87 0.00 0.00 14009.75 7410.61 25206.15 00:07:12.645 00:07:12.645 real 0m2.690s 00:07:12.645 user 0m2.383s 00:07:12.645 sys 0m0.194s 00:07:12.645 ************************************ 00:07:12.645 07:39:02 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.645 07:39:02 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:12.645 END TEST bdev_write_zeroes 00:07:12.645 ************************************ 00:07:12.645 07:39:02 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:12.645 07:39:02 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:12.645 07:39:02 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.645 07:39:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:12.645 ************************************ 00:07:12.645 START TEST bdev_json_nonenclosed 00:07:12.645 ************************************ 00:07:12.645 07:39:02 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:12.645 [2024-11-29 07:39:02.371303] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:07:12.645 [2024-11-29 07:39:02.371421] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61973 ] 00:07:12.645 [2024-11-29 07:39:02.531925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.904 [2024-11-29 07:39:02.626800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.904 [2024-11-29 07:39:02.626878] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:12.904 [2024-11-29 07:39:02.626895] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:12.904 [2024-11-29 07:39:02.626904] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:12.904 00:07:12.904 real 0m0.487s 00:07:12.904 user 0m0.293s 00:07:12.904 sys 0m0.090s 00:07:12.905 07:39:02 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.905 ************************************ 00:07:12.905 END TEST bdev_json_nonenclosed 00:07:12.905 ************************************ 00:07:12.905 07:39:02 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:12.905 07:39:02 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:12.905 07:39:02 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:12.905 07:39:02 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.905 07:39:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:13.164 ************************************ 00:07:13.164 START TEST bdev_json_nonarray 00:07:13.164 ************************************ 00:07:13.164 07:39:02 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:13.164 [2024-11-29 07:39:02.916801] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:07:13.164 [2024-11-29 07:39:02.916913] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61993 ] 00:07:13.164 [2024-11-29 07:39:03.074528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.423 [2024-11-29 07:39:03.169437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.423 [2024-11-29 07:39:03.169528] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
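Annotation: bdev_json_nonenclosed (finished above) and bdev_json_nonarray (in progress here) are negative tests: each feeds bdevperf a deliberately malformed --json config and expects exactly the *ERROR* lines captured in this trace, followed by spdk_app_stop exiting non-zero. The fixture files themselves are not reproduced in the log; the shapes below are my assumptions about what they plausibly contain, given that a well-formed SPDK config is a top-level object holding a "subsystems" array, e.g. {"subsystems":[{"subsystem":"bdev","config":[]}]}:

    # Hypothetical stand-ins for the real nonenclosed.json / nonarray.json
    # fixtures (assumed contents, chosen to trigger the errors in this log):
    printf '%s\n' '"subsystems": []' > /tmp/nonenclosed_example.json
    # -> *ERROR*: Invalid JSON configuration: not enclosed in {}.
    printf '%s\n' '{ "subsystems": { "subsystem": "bdev" } }' > /tmp/nonarray_example.json
    # -> *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.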
00:07:13.423 [2024-11-29 07:39:03.169545] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:13.423 [2024-11-29 07:39:03.169554] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:13.423 00:07:13.423 real 0m0.487s 00:07:13.423 user 0m0.289s 00:07:13.423 sys 0m0.095s 00:07:13.423 07:39:03 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.423 07:39:03 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:13.423 ************************************ 00:07:13.423 END TEST bdev_json_nonarray 00:07:13.423 ************************************ 00:07:13.683 07:39:03 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:07:13.683 07:39:03 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:07:13.683 07:39:03 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:13.683 07:39:03 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:13.683 07:39:03 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.683 07:39:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:13.683 ************************************ 00:07:13.683 START TEST bdev_gpt_uuid 00:07:13.683 ************************************ 00:07:13.683 07:39:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:07:13.683 07:39:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:07:13.683 07:39:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:07:13.683 07:39:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62024 00:07:13.683 07:39:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:13.683 07:39:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62024 00:07:13.683 07:39:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 62024 ']' 00:07:13.683 07:39:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:13.683 07:39:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.683 07:39:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:13.683 07:39:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.683 07:39:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:13.683 07:39:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:13.683 [2024-11-29 07:39:03.472030] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
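Annotation: the bdev_gpt_uuid test starting here loads bdev.json into spdk_tgt, then looks up each GPT partition bdev by its unique partition GUID and asserts on the alias and GUID fields of the JSON dump that follows. The same query can be issued by hand once the target is up (rpc.py defaults to the /var/tmp/spdk.sock socket the log is waiting on; the jq filters are the ones the test itself uses):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$RPC" bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 \
      | jq -r '.[0].aliases[0], .[0].driver_specific.gpt.unique_partition_guid'
    # both lines should print 6f89f330-603b-4116-ac73-2ca8eae53030, matching
    # the [[ ... == ... ]] checks in the trace below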
00:07:13.683 [2024-11-29 07:39:03.472126] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62024 ] 00:07:13.944 [2024-11-29 07:39:03.626933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.944 [2024-11-29 07:39:03.722008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.515 07:39:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:14.515 07:39:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:07:14.515 07:39:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:14.515 07:39:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.515 07:39:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:14.773 Some configs were skipped because the RPC state that can call them passed over. 00:07:14.773 07:39:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.773 07:39:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:07:14.773 07:39:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.773 07:39:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:14.773 07:39:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.773 07:39:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:14.773 07:39:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.773 07:39:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:14.773 07:39:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.773 07:39:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:07:14.773 { 00:07:14.773 "name": "Nvme1n1p1", 00:07:14.773 "aliases": [ 00:07:14.773 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:14.773 ], 00:07:14.773 "product_name": "GPT Disk", 00:07:14.773 "block_size": 4096, 00:07:14.773 "num_blocks": 655104, 00:07:14.773 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:14.773 "assigned_rate_limits": { 00:07:14.773 "rw_ios_per_sec": 0, 00:07:14.773 "rw_mbytes_per_sec": 0, 00:07:14.773 "r_mbytes_per_sec": 0, 00:07:14.773 "w_mbytes_per_sec": 0 00:07:14.773 }, 00:07:14.773 "claimed": false, 00:07:14.773 "zoned": false, 00:07:14.773 "supported_io_types": { 00:07:14.773 "read": true, 00:07:14.773 "write": true, 00:07:14.773 "unmap": true, 00:07:14.773 "flush": true, 00:07:14.773 "reset": true, 00:07:14.773 "nvme_admin": false, 00:07:14.773 "nvme_io": false, 00:07:14.773 "nvme_io_md": false, 00:07:14.773 "write_zeroes": true, 00:07:14.773 "zcopy": false, 00:07:14.773 "get_zone_info": false, 00:07:14.773 "zone_management": false, 00:07:14.773 "zone_append": false, 00:07:14.773 "compare": true, 00:07:14.773 "compare_and_write": false, 00:07:14.773 "abort": true, 00:07:14.773 "seek_hole": false, 00:07:14.773 "seek_data": false, 00:07:14.773 "copy": true, 00:07:14.773 "nvme_iov_md": false 00:07:14.773 }, 00:07:14.773 "driver_specific": { 
00:07:14.773 "gpt": { 00:07:14.773 "base_bdev": "Nvme1n1", 00:07:14.773 "offset_blocks": 256, 00:07:14.774 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:14.774 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:14.774 "partition_name": "SPDK_TEST_first" 00:07:14.774 } 00:07:14.774 } 00:07:14.774 } 00:07:14.774 ]' 00:07:14.774 07:39:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:07:14.774 07:39:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:07:14.774 07:39:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:07:15.033 07:39:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:15.033 07:39:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:15.033 07:39:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:15.033 07:39:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:15.033 07:39:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.033 07:39:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:15.033 07:39:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.033 07:39:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:07:15.033 { 00:07:15.033 "name": "Nvme1n1p2", 00:07:15.033 "aliases": [ 00:07:15.033 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:15.033 ], 00:07:15.033 "product_name": "GPT Disk", 00:07:15.033 "block_size": 4096, 00:07:15.033 "num_blocks": 655103, 00:07:15.033 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:15.033 "assigned_rate_limits": { 00:07:15.033 "rw_ios_per_sec": 0, 00:07:15.033 "rw_mbytes_per_sec": 0, 00:07:15.033 "r_mbytes_per_sec": 0, 00:07:15.033 "w_mbytes_per_sec": 0 00:07:15.033 }, 00:07:15.033 "claimed": false, 00:07:15.033 "zoned": false, 00:07:15.033 "supported_io_types": { 00:07:15.033 "read": true, 00:07:15.033 "write": true, 00:07:15.033 "unmap": true, 00:07:15.033 "flush": true, 00:07:15.033 "reset": true, 00:07:15.033 "nvme_admin": false, 00:07:15.033 "nvme_io": false, 00:07:15.033 "nvme_io_md": false, 00:07:15.033 "write_zeroes": true, 00:07:15.033 "zcopy": false, 00:07:15.033 "get_zone_info": false, 00:07:15.033 "zone_management": false, 00:07:15.033 "zone_append": false, 00:07:15.033 "compare": true, 00:07:15.033 "compare_and_write": false, 00:07:15.033 "abort": true, 00:07:15.033 "seek_hole": false, 00:07:15.033 "seek_data": false, 00:07:15.034 "copy": true, 00:07:15.034 "nvme_iov_md": false 00:07:15.034 }, 00:07:15.034 "driver_specific": { 00:07:15.034 "gpt": { 00:07:15.034 "base_bdev": "Nvme1n1", 00:07:15.034 "offset_blocks": 655360, 00:07:15.034 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:15.034 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:15.034 "partition_name": "SPDK_TEST_second" 00:07:15.034 } 00:07:15.034 } 00:07:15.034 } 00:07:15.034 ]' 00:07:15.034 07:39:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:07:15.034 07:39:04 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:07:15.034 07:39:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:07:15.034 07:39:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:15.034 07:39:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:15.034 07:39:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:15.034 07:39:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 62024 00:07:15.034 07:39:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 62024 ']' 00:07:15.034 07:39:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 62024 00:07:15.034 07:39:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:07:15.034 07:39:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:15.034 07:39:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62024 00:07:15.034 07:39:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:15.034 07:39:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:15.034 killing process with pid 62024 00:07:15.034 07:39:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62024' 00:07:15.034 07:39:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 62024 00:07:15.034 07:39:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 62024 00:07:16.536 00:07:16.536 real 0m2.969s 00:07:16.536 user 0m3.110s 00:07:16.536 sys 0m0.358s 00:07:16.536 07:39:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.536 ************************************ 00:07:16.536 END TEST bdev_gpt_uuid 00:07:16.536 ************************************ 00:07:16.536 07:39:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:16.536 07:39:06 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:07:16.536 07:39:06 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:07:16.536 07:39:06 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:07:16.536 07:39:06 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:16.536 07:39:06 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:16.536 07:39:06 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:16.536 07:39:06 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:16.536 07:39:06 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:16.536 07:39:06 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:16.794 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:17.054 Waiting for block devices as requested 00:07:17.054 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:17.054 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:07:17.314 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:17.314 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:22.601 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:22.601 07:39:12 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:22.601 07:39:12 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:22.601 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:22.601 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:22.601 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:22.601 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:22.601 07:39:12 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:22.601 00:07:22.601 real 0m54.221s 00:07:22.601 user 1m9.566s 00:07:22.601 sys 0m7.354s 00:07:22.601 07:39:12 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.601 07:39:12 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:22.601 ************************************ 00:07:22.601 END TEST blockdev_nvme_gpt 00:07:22.601 ************************************ 00:07:22.601 07:39:12 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:22.601 07:39:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:22.601 07:39:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:22.601 07:39:12 -- common/autotest_common.sh@10 -- # set +x 00:07:22.601 ************************************ 00:07:22.601 START TEST nvme 00:07:22.601 ************************************ 00:07:22.601 07:39:12 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:22.859 * Looking for test storage... 00:07:22.859 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:22.859 07:39:12 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:22.859 07:39:12 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:07:22.859 07:39:12 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:22.859 07:39:12 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:22.859 07:39:12 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:22.860 07:39:12 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:22.860 07:39:12 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:22.860 07:39:12 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:22.860 07:39:12 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:22.860 07:39:12 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:22.860 07:39:12 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:22.860 07:39:12 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:22.860 07:39:12 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:22.860 07:39:12 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:22.860 07:39:12 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:22.860 07:39:12 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:22.860 07:39:12 nvme -- scripts/common.sh@345 -- # : 1 00:07:22.860 07:39:12 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:22.860 07:39:12 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:22.860 07:39:12 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:22.860 07:39:12 nvme -- scripts/common.sh@353 -- # local d=1 00:07:22.860 07:39:12 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:22.860 07:39:12 nvme -- scripts/common.sh@355 -- # echo 1 00:07:22.860 07:39:12 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:22.860 07:39:12 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:22.860 07:39:12 nvme -- scripts/common.sh@353 -- # local d=2 00:07:22.860 07:39:12 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:22.860 07:39:12 nvme -- scripts/common.sh@355 -- # echo 2 00:07:22.860 07:39:12 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:22.860 07:39:12 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:22.860 07:39:12 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:22.860 07:39:12 nvme -- scripts/common.sh@368 -- # return 0 00:07:22.860 07:39:12 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:22.860 07:39:12 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:22.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.860 --rc genhtml_branch_coverage=1 00:07:22.860 --rc genhtml_function_coverage=1 00:07:22.860 --rc genhtml_legend=1 00:07:22.860 --rc geninfo_all_blocks=1 00:07:22.860 --rc geninfo_unexecuted_blocks=1 00:07:22.860 00:07:22.860 ' 00:07:22.860 07:39:12 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:22.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.860 --rc genhtml_branch_coverage=1 00:07:22.860 --rc genhtml_function_coverage=1 00:07:22.860 --rc genhtml_legend=1 00:07:22.860 --rc geninfo_all_blocks=1 00:07:22.860 --rc geninfo_unexecuted_blocks=1 00:07:22.860 00:07:22.860 ' 00:07:22.860 07:39:12 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:22.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.860 --rc genhtml_branch_coverage=1 00:07:22.860 --rc genhtml_function_coverage=1 00:07:22.860 --rc genhtml_legend=1 00:07:22.860 --rc geninfo_all_blocks=1 00:07:22.860 --rc geninfo_unexecuted_blocks=1 00:07:22.860 00:07:22.860 ' 00:07:22.860 07:39:12 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:22.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.860 --rc genhtml_branch_coverage=1 00:07:22.860 --rc genhtml_function_coverage=1 00:07:22.860 --rc genhtml_legend=1 00:07:22.860 --rc geninfo_all_blocks=1 00:07:22.860 --rc geninfo_unexecuted_blocks=1 00:07:22.860 00:07:22.860 ' 00:07:22.860 07:39:12 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:23.427 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:23.686 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:23.686 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:23.686 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:23.686 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:23.686 07:39:13 nvme -- nvme/nvme.sh@79 -- # uname 00:07:23.686 07:39:13 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:23.686 07:39:13 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:23.686 07:39:13 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:23.686 07:39:13 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:23.686 07:39:13 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:07:23.686 07:39:13 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:07:23.686 07:39:13 nvme -- common/autotest_common.sh@1075 -- # stubpid=62660 00:07:23.686 Waiting for stub to ready for secondary processes... 00:07:23.686 07:39:13 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:07:23.686 07:39:13 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:23.686 07:39:13 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:23.686 07:39:13 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/62660 ]] 00:07:23.686 07:39:13 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:07:23.945 [2024-11-29 07:39:13.659828] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:07:23.945 [2024-11-29 07:39:13.659947] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:24.513 [2024-11-29 07:39:14.408705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:24.773 [2024-11-29 07:39:14.504374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:24.773 [2024-11-29 07:39:14.504771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:24.773 [2024-11-29 07:39:14.504874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.773 [2024-11-29 07:39:14.519222] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:24.773 [2024-11-29 07:39:14.519254] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:24.773 [2024-11-29 07:39:14.532282] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:24.773 [2024-11-29 07:39:14.532508] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:24.773 [2024-11-29 07:39:14.536919] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:24.773 [2024-11-29 07:39:14.537297] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:24.773 [2024-11-29 07:39:14.537426] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:24.773 [2024-11-29 07:39:14.541638] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:24.773 [2024-11-29 07:39:14.541781] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:24.773 [2024-11-29 07:39:14.541830] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:24.773 [2024-11-29 07:39:14.544723] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:24.773 [2024-11-29 07:39:14.544872] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:24.773 [2024-11-29 07:39:14.544922] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:24.773 [2024-11-29 07:39:14.544957] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:24.773 [2024-11-29 07:39:14.544984] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:24.773 done. 00:07:24.773 07:39:14 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:24.773 07:39:14 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:07:24.773 07:39:14 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:24.773 07:39:14 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:07:24.773 07:39:14 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.773 07:39:14 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:24.773 ************************************ 00:07:24.773 START TEST nvme_reset 00:07:24.773 ************************************ 00:07:24.773 07:39:14 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:25.033 Initializing NVMe Controllers 00:07:25.033 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:25.033 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:25.033 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:25.033 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:25.033 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:25.033 00:07:25.033 real 0m0.221s 00:07:25.033 user 0m0.073s 00:07:25.033 sys 0m0.104s 00:07:25.033 07:39:14 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.033 ************************************ 00:07:25.033 END TEST nvme_reset 00:07:25.033 ************************************ 00:07:25.033 07:39:14 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:25.033 07:39:14 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:25.033 07:39:14 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:25.033 07:39:14 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.033 07:39:14 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:25.033 ************************************ 00:07:25.033 START TEST nvme_identify 00:07:25.033 ************************************ 00:07:25.033 07:39:14 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:07:25.033 07:39:14 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:25.033 07:39:14 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:25.033 07:39:14 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:25.034 07:39:14 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:25.034 07:39:14 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:25.034 07:39:14 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:07:25.034 07:39:14 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:25.034 07:39:14 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:25.034 07:39:14 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:25.297 07:39:14 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:25.297 07:39:14 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:25.297 07:39:14 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:25.297 [2024-11-29 
07:39:15.151393] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 62681 terminated unexpected ===================================================== 00:07:25.297 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:25.297 ===================================================== 00:07:25.297 Controller Capabilities/Features 00:07:25.297 ================================ 00:07:25.297 Vendor ID: 1b36 00:07:25.297 Subsystem Vendor ID: 1af4 00:07:25.297 Serial Number: 12341 00:07:25.297 Model Number: QEMU NVMe Ctrl 00:07:25.297 Firmware Version: 8.0.0 00:07:25.297 Recommended Arb Burst: 6 00:07:25.297 IEEE OUI Identifier: 00 54 52 00:07:25.297 Multi-path I/O 00:07:25.297 May have multiple subsystem ports: No 00:07:25.297 May have multiple controllers: No 00:07:25.297 Associated with SR-IOV VF: No 00:07:25.297 Max Data Transfer Size: 524288 00:07:25.297 Max Number of Namespaces: 256 00:07:25.297 Max Number of I/O Queues: 64 00:07:25.297 NVMe Specification Version (VS): 1.4 00:07:25.297 NVMe Specification Version (Identify): 1.4 00:07:25.297 Maximum Queue Entries: 2048 00:07:25.297 Contiguous Queues Required: Yes 00:07:25.298 Arbitration Mechanisms Supported 00:07:25.298 Weighted Round Robin: Not Supported 00:07:25.298 Vendor Specific: Not Supported 00:07:25.298 Reset Timeout: 7500 ms 00:07:25.298 Doorbell Stride: 4 bytes 00:07:25.298 NVM Subsystem Reset: Not Supported 00:07:25.298 Command Sets Supported 00:07:25.298 NVM Command Set: Supported 00:07:25.298 Boot Partition: Not Supported 00:07:25.298 Memory Page Size Minimum: 4096 bytes 00:07:25.298 Memory Page Size Maximum: 65536 bytes 00:07:25.298 Persistent Memory Region: Not Supported 00:07:25.298 Optional Asynchronous Events Supported 00:07:25.298 Namespace Attribute Notices: Supported 00:07:25.298 Firmware Activation Notices: Not Supported 00:07:25.298 ANA Change Notices: Not Supported 00:07:25.298 PLE Aggregate Log Change Notices: Not Supported 00:07:25.298 LBA Status Info Alert Notices: Not Supported 00:07:25.298 EGE Aggregate Log Change Notices: Not Supported 00:07:25.298 Normal NVM Subsystem Shutdown event: Not Supported 00:07:25.298 Zone Descriptor Change Notices: Not Supported 00:07:25.298 Discovery Log Change Notices: Not Supported 00:07:25.298 Controller Attributes 00:07:25.298 128-bit Host Identifier: Not Supported 00:07:25.298 Non-Operational Permissive Mode: Not Supported 00:07:25.298 NVM Sets: Not Supported 00:07:25.298 Read Recovery Levels: Not Supported 00:07:25.298 Endurance Groups: Not Supported 00:07:25.298 Predictable Latency Mode: Not Supported 00:07:25.298 Traffic Based Keep Alive: Not Supported 00:07:25.298 Namespace Granularity: Not Supported 00:07:25.298 SQ Associations: Not Supported 00:07:25.298 UUID List: Not Supported 00:07:25.298 Multi-Domain Subsystem: Not Supported 00:07:25.298 Fixed Capacity Management: Not Supported 00:07:25.298 Variable Capacity Management: Not Supported 00:07:25.298 Delete Endurance Group: Not Supported 00:07:25.298 Delete NVM Set: Not Supported 00:07:25.298 Extended LBA Formats Supported: Supported 00:07:25.298 Flexible Data Placement Supported: Not Supported 00:07:25.298 00:07:25.298 Controller Memory Buffer Support 00:07:25.298 ================================ 00:07:25.298 Supported: No 00:07:25.298 00:07:25.298 Persistent Memory Region Support 00:07:25.298 ================================ 00:07:25.298 Supported: No 00:07:25.298 00:07:25.298 Admin Command Set Attributes 00:07:25.298 ============================ 00:07:25.298 Security Send/Receive: Not Supported 00:07:25.298
Format NVM: Supported 00:07:25.298 Firmware Activate/Download: Not Supported 00:07:25.298 Namespace Management: Supported 00:07:25.298 Device Self-Test: Not Supported 00:07:25.298 Directives: Supported 00:07:25.298 NVMe-MI: Not Supported 00:07:25.298 Virtualization Management: Not Supported 00:07:25.298 Doorbell Buffer Config: Supported 00:07:25.298 Get LBA Status Capability: Not Supported 00:07:25.298 Command & Feature Lockdown Capability: Not Supported 00:07:25.298 Abort Command Limit: 4 00:07:25.298 Async Event Request Limit: 4 00:07:25.298 Number of Firmware Slots: N/A 00:07:25.298 Firmware Slot 1 Read-Only: N/A 00:07:25.298 Firmware Activation Without Reset: N/A 00:07:25.298 Multiple Update Detection Support: N/A 00:07:25.298 Firmware Update Granularity: No Information Provided 00:07:25.298 Per-Namespace SMART Log: Yes 00:07:25.298 Asymmetric Namespace Access Log Page: Not Supported 00:07:25.298 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:25.298 Command Effects Log Page: Supported 00:07:25.298 Get Log Page Extended Data: Supported 00:07:25.298 Telemetry Log Pages: Not Supported 00:07:25.298 Persistent Event Log Pages: Not Supported 00:07:25.298 Supported Log Pages Log Page: May Support 00:07:25.298 Commands Supported & Effects Log Page: Not Supported 00:07:25.298 Feature Identifiers & Effects Log Page:May Support 00:07:25.298 NVMe-MI Commands & Effects Log Page: May Support 00:07:25.298 Data Area 4 for Telemetry Log: Not Supported 00:07:25.298 Error Log Page Entries Supported: 1 00:07:25.298 Keep Alive: Not Supported 00:07:25.298 00:07:25.298 NVM Command Set Attributes 00:07:25.298 ========================== 00:07:25.298 Submission Queue Entry Size 00:07:25.298 Max: 64 00:07:25.298 Min: 64 00:07:25.298 Completion Queue Entry Size 00:07:25.298 Max: 16 00:07:25.298 Min: 16 00:07:25.298 Number of Namespaces: 256 00:07:25.298 Compare Command: Supported 00:07:25.298 Write Uncorrectable Command: Not Supported 00:07:25.298 Dataset Management Command: Supported 00:07:25.298 Write Zeroes Command: Supported 00:07:25.298 Set Features Save Field: Supported 00:07:25.298 Reservations: Not Supported 00:07:25.298 Timestamp: Supported 00:07:25.298 Copy: Supported 00:07:25.298 Volatile Write Cache: Present 00:07:25.298 Atomic Write Unit (Normal): 1 00:07:25.298 Atomic Write Unit (PFail): 1 00:07:25.298 Atomic Compare & Write Unit: 1 00:07:25.298 Fused Compare & Write: Not Supported 00:07:25.298 Scatter-Gather List 00:07:25.298 SGL Command Set: Supported 00:07:25.298 SGL Keyed: Not Supported 00:07:25.298 SGL Bit Bucket Descriptor: Not Supported 00:07:25.298 SGL Metadata Pointer: Not Supported 00:07:25.298 Oversized SGL: Not Supported 00:07:25.298 SGL Metadata Address: Not Supported 00:07:25.298 SGL Offset: Not Supported 00:07:25.298 Transport SGL Data Block: Not Supported 00:07:25.298 Replay Protected Memory Block: Not Supported 00:07:25.298 00:07:25.298 Firmware Slot Information 00:07:25.298 ========================= 00:07:25.298 Active slot: 1 00:07:25.298 Slot 1 Firmware Revision: 1.0 00:07:25.298 00:07:25.298 00:07:25.298 Commands Supported and Effects 00:07:25.298 ============================== 00:07:25.298 Admin Commands 00:07:25.298 -------------- 00:07:25.298 Delete I/O Submission Queue (00h): Supported 00:07:25.298 Create I/O Submission Queue (01h): Supported 00:07:25.298 Get Log Page (02h): Supported 00:07:25.298 Delete I/O Completion Queue (04h): Supported 00:07:25.298 Create I/O Completion Queue (05h): Supported 00:07:25.298 Identify (06h): Supported 00:07:25.298 Abort (08h): Supported 
00:07:25.298 Set Features (09h): Supported 00:07:25.298 Get Features (0Ah): Supported 00:07:25.298 Asynchronous Event Request (0Ch): Supported 00:07:25.298 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:25.298 Directive Send (19h): Supported 00:07:25.298 Directive Receive (1Ah): Supported 00:07:25.298 Virtualization Management (1Ch): Supported 00:07:25.298 Doorbell Buffer Config (7Ch): Supported 00:07:25.298 Format NVM (80h): Supported LBA-Change 00:07:25.298 I/O Commands 00:07:25.298 ------------ 00:07:25.298 Flush (00h): Supported LBA-Change 00:07:25.298 Write (01h): Supported LBA-Change 00:07:25.298 Read (02h): Supported 00:07:25.298 Compare (05h): Supported 00:07:25.298 Write Zeroes (08h): Supported LBA-Change 00:07:25.298 Dataset Management (09h): Supported LBA-Change 00:07:25.298 Unknown (0Ch): Supported 00:07:25.298 Unknown (12h): Supported 00:07:25.298 Copy (19h): Supported LBA-Change 00:07:25.298 Unknown (1Dh): Supported LBA-Change 00:07:25.298 00:07:25.298 Error Log 00:07:25.298 ========= 00:07:25.298 00:07:25.298 Arbitration 00:07:25.298 =========== 00:07:25.298 Arbitration Burst: no limit 00:07:25.298 00:07:25.298 Power Management 00:07:25.298 ================ 00:07:25.298 Number of Power States: 1 00:07:25.298 Current Power State: Power State #0 00:07:25.298 Power State #0: 00:07:25.298 Max Power: 25.00 W 00:07:25.298 Non-Operational State: Operational 00:07:25.298 Entry Latency: 16 microseconds 00:07:25.298 Exit Latency: 4 microseconds 00:07:25.298 Relative Read Throughput: 0 00:07:25.298 Relative Read Latency: 0 00:07:25.298 Relative Write Throughput: 0 00:07:25.298 Relative Write Latency: 0 00:07:25.298 Idle Power: Not Reported 00:07:25.298 Active Power: Not Reported 00:07:25.298 Non-Operational Permissive Mode: Not Supported 00:07:25.298 00:07:25.298 Health Information 00:07:25.298 ================== 00:07:25.298 Critical Warnings: 00:07:25.298 Available Spare Space: OK 00:07:25.298 Temperature: OK 00:07:25.298 Device Reliability: OK 00:07:25.298 Read Only: No 00:07:25.298 Volatile Memory Backup: OK 00:07:25.298 Current Temperature: 323 Kelvin (50 Celsius) 00:07:25.298 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:25.298 Available Spare: 0% 00:07:25.298 Available Spare Threshold: 0% 00:07:25.298 Life Percentage Used: 0% 00:07:25.298 Data Units Read: 1054 00:07:25.298 Data Units Written: 923 00:07:25.298 Host Read Commands: 54005 00:07:25.298 Host Write Commands: 52824 00:07:25.298 Controller Busy Time: 0 minutes 00:07:25.298 Power Cycles: 0 00:07:25.298 Power On Hours: 0 hours 00:07:25.298 Unsafe Shutdowns: 0 00:07:25.299 Unrecoverable Media Errors: 0 00:07:25.299 Lifetime Error Log Entries: 0 00:07:25.299 Warning Temperature Time: 0 minutes 00:07:25.299 Critical Temperature Time: 0 minutes 00:07:25.299 00:07:25.299 Number of Queues 00:07:25.299 ================ 00:07:25.299 Number of I/O Submission Queues: 64 00:07:25.299 Number of I/O Completion Queues: 64 00:07:25.299 00:07:25.299 ZNS Specific Controller Data 00:07:25.299 ============================ 00:07:25.299 Zone Append Size Limit: 0 00:07:25.299 00:07:25.299 00:07:25.299 Active Namespaces 00:07:25.299 ================= 00:07:25.299 Namespace ID:1 00:07:25.299 Error Recovery Timeout: Unlimited 00:07:25.299 Command Set Identifier: NVM (00h) 00:07:25.299 Deallocate: Supported 00:07:25.299 Deallocated/Unwritten Error: Supported 00:07:25.299 Deallocated Read Value: All 0x00 00:07:25.299 Deallocate in Write Zeroes: Not Supported 00:07:25.299 Deallocated Guard Field: 0xFFFF 00:07:25.299 Flush: 
Supported 00:07:25.299 Reservation: Not Supported 00:07:25.299 Namespace Sharing Capabilities: Private 00:07:25.299 Size (in LBAs): 1310720 (5GiB) 00:07:25.299 Capacity (in LBAs): 1310720 (5GiB) 00:07:25.299 Utilization (in LBAs): 1310720 (5GiB) 00:07:25.299 Thin Provisioning: Not Supported 00:07:25.299 Per-NS Atomic Units: No 00:07:25.299 Maximum Single Source Range Length: 128 00:07:25.299 Maximum Copy Length: 128 00:07:25.299 Maximum Source Range Count: 128 00:07:25.299 NGUID/EUI64 Never Reused: No 00:07:25.299 Namespace Write Protected: No 00:07:25.299 Number of LBA Formats: 8 00:07:25.299 Current LBA Format: LBA Format #04 00:07:25.299 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:25.299 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:25.299 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:25.299 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:25.299 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:25.299 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:25.299 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:25.299 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:25.299 00:07:25.299 NVM Specific Namespace Data 00:07:25.299 =========================== 00:07:25.299 Logical Block Storage Tag Mask: 0 00:07:25.299 Protection Information Capabilities: 00:07:25.299 16b Guard Protection Information Storage Tag Support: No 00:07:25.299 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:25.299 Storage Tag Check Read Support: No 00:07:25.299 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.299 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.299 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.299 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.299 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.299 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.299 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.299 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.299 ===================================================== 00:07:25.299 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:25.299 ===================================================== 00:07:25.299 Controller Capabilities/Features 00:07:25.299 ================================ 00:07:25.299 Vendor ID: 1b36 00:07:25.299 Subsystem Vendor ID: 1af4 00:07:25.299 Serial Number: 12343 00:07:25.299 Model Number: QEMU NVMe Ctrl 00:07:25.299 Firmware Version: 8.0.0 00:07:25.299 Recommended Arb Burst: 6 00:07:25.299 IEEE OUI Identifier: 00 54 52 00:07:25.299 Multi-path I/O 00:07:25.299 May have multiple subsystem ports: No 00:07:25.299 May have multiple controllers: Yes 00:07:25.299 Associated with SR-IOV VF: No 00:07:25.299 Max Data Transfer Size: 524288 00:07:25.299 Max Number of Namespaces: 256 00:07:25.299 Max Number of I/O Queues: 64 00:07:25.299 NVMe Specification Version (VS): 1.4 00:07:25.299 NVMe Specification Version (Identify): 1.4 00:07:25.299 Maximum Queue Entries: 2048 00:07:25.299 Contiguous Queues Required: Yes 00:07:25.299 Arbitration Mechanisms Supported 00:07:25.299 Weighted Round Robin: Not Supported 00:07:25.299 Vendor Specific: Not Supported 00:07:25.299 
Reset Timeout: 7500 ms 00:07:25.299 Doorbell Stride: 4 bytes 00:07:25.299 NVM Subsystem Reset: Not Supported 00:07:25.299 Command Sets Supported 00:07:25.299 NVM Command Set: Supported 00:07:25.299 Boot Partition: Not Supported 00:07:25.299 Memory Page Size Minimum: 4096 bytes 00:07:25.299 Memory Page Size Maximum: 65536 bytes 00:07:25.299 Persistent Memory Region: Not Supported 00:07:25.299 Optional Asynchronous Events Supported 00:07:25.299 Namespace Attribute Notices: Supported 00:07:25.299 Firmware Activation Notices: Not Supported 00:07:25.299 ANA Change Notices: Not Supported 00:07:25.299 PLE Aggregate Log Change Notices: Not Supported 00:07:25.299 LBA Status Info Alert Notices: Not Supported 00:07:25.299 EGE Aggregate Log Change Notices: Not Supported 00:07:25.299 Normal NVM Subsystem Shutdown event: Not Supported 00:07:25.299 Zone Descriptor Change Notices: Not Supported 00:07:25.299 Discovery Log Change Notices: Not Supported 00:07:25.299 Controller Attributes 00:07:25.299 128-bit Host Identifier: Not Supported 00:07:25.299 Non-Operational Permissive Mode: Not Supported 00:07:25.299 NVM Sets: Not Supported 00:07:25.299 Read Recovery Levels: Not Supported 00:07:25.299 Endurance Groups: Supported 00:07:25.299 Predictable Latency Mode: Not Supported 00:07:25.299 Traffic Based Keep Alive: Not Supported 00:07:25.299 Namespace Granularity: Not Supported 00:07:25.299 SQ Associations: Not Supported 00:07:25.299 UUID List: Not Supported 00:07:25.299 Multi-Domain Subsystem: Not Supported 00:07:25.299 Fixed Capacity Management: Not Supported 00:07:25.299 Variable Capacity Management: Not Supported 00:07:25.299 Delete Endurance Group: Not Supported 00:07:25.299 Delete NVM Set: Not Supported 00:07:25.299 Extended LBA Formats Supported: Supported 00:07:25.299 Flexible Data Placement Supported: Supported 00:07:25.299 00:07:25.299 Controller Memory Buffer Support 00:07:25.299 ================================ 00:07:25.299 Supported: No 00:07:25.299 00:07:25.299 Persistent Memory Region Support 00:07:25.299 ================================ 00:07:25.299 Supported: No 00:07:25.299 00:07:25.299 Admin Command Set Attributes 00:07:25.299 ============================ 00:07:25.299 Security Send/Receive: Not Supported 00:07:25.299 Format NVM: Supported 00:07:25.299 Firmware Activate/Download: Not Supported 00:07:25.299 Namespace Management: Supported 00:07:25.299 Device Self-Test: Not Supported 00:07:25.299 Directives: Supported 00:07:25.299 NVMe-MI: Not Supported 00:07:25.299 Virtualization Management: Not Supported 00:07:25.299 Doorbell Buffer Config: Supported 00:07:25.299 Get LBA Status Capability: Not Supported 00:07:25.299 Command & Feature Lockdown Capability: Not Supported 00:07:25.299 Abort Command Limit: 4 00:07:25.299 Async Event Request Limit: 4 00:07:25.299 Number of Firmware Slots: N/A 00:07:25.299 Firmware Slot 1 Read-Only: N/A 00:07:25.299 Firmware Activation Without Reset: N/A 00:07:25.299 Multiple Update Detection Support: N/A 00:07:25.299 Firmware Update Granularity: No Information Provided 00:07:25.299 Per-Namespace SMART Log: Yes 00:07:25.299 Asymmetric Namespace Access Log Page: Not Supported 00:07:25.299 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:25.299 Command Effects Log Page: Supported 00:07:25.299 Get Log Page Extended Data: Supported 00:07:25.299 Telemetry Log Pages: Not Supported 00:07:25.299 Persistent Event Log Pages: Not Supported 00:07:25.299 Supported Log Pages Log Page: May Support 00:07:25.299 Commands Supported & Effects Log Page: Not Supported 00:07:25.299
Feature Identifiers & Effects Log Page:May Support 00:07:25.299 NVMe-MI Commands & Effects Log Page: May Support 00:07:25.299 Data Area 4 for Telemetry Log: Not Supported 00:07:25.299 Error Log Page Entries Supported: 1 00:07:25.299 Keep Alive: Not Supported 00:07:25.299 00:07:25.299 NVM Command Set Attributes 00:07:25.299 ========================== 00:07:25.299 Submission Queue Entry Size 00:07:25.299 Max: 64 00:07:25.299 Min: 64 00:07:25.299 Completion Queue Entry Size 00:07:25.299 Max: 16 00:07:25.299 Min: 16 00:07:25.299 Number of Namespaces: 256 00:07:25.299 Compare Command: Supported 00:07:25.299 Write Uncorrectable Command: Not Supported 00:07:25.299 Dataset Management Command: Supported 00:07:25.299 Write Zeroes Command: Supported 00:07:25.299 Set Features Save Field: Supported 00:07:25.299 Reservations: Not Supported 00:07:25.299 Timestamp: Supported 00:07:25.299 Copy: Supported 00:07:25.299 Volatile Write Cache: Present 00:07:25.299 Atomic Write Unit (Normal): 1 00:07:25.299 Atomic Write Unit (PFail): 1 00:07:25.299 Atomic Compare & Write Unit: 1 00:07:25.299 Fused Compare & Write: Not Supported 00:07:25.299 Scatter-Gather List 00:07:25.299 SGL Command Set: Supported 00:07:25.300 SGL Keyed: Not Supported 00:07:25.300 SGL Bit Bucket Descriptor: Not Supported 00:07:25.300 SGL Metadata Pointer: Not Supported 00:07:25.300 Oversized SGL: Not Supported 00:07:25.300 SGL Metadata Address: Not Supported 00:07:25.300 SGL Offset: Not Supported 00:07:25.300 Transport SGL Data Block: Not Supported 00:07:25.300 Replay Protected Memory Block: Not Supported 00:07:25.300 00:07:25.300 Firmware Slot Information 00:07:25.300 ========================= 00:07:25.300 Active slot: 1 00:07:25.300 Slot 1 Firmware Revision: 1.0 00:07:25.300 00:07:25.300 00:07:25.300 Commands Supported and Effects 00:07:25.300 ============================== 00:07:25.300 Admin Commands 00:07:25.300 -------------- 00:07:25.300 Delete I/O Submission Queue (00h): Supported 00:07:25.300 Create I/O Submission Queue (01h): Supported 00:07:25.300 Get Log Page (02h): Supported 00:07:25.300 Delete I/O Completion Queue (04h): Supported 00:07:25.300 Create I/O Completion Queue (05h): Supported 00:07:25.300 Identify (06h): Supported 00:07:25.300 Abort (08h): Supported 00:07:25.300 Set Features (09h): Supported 00:07:25.300 Get Features (0Ah): Supported 00:07:25.300 Asynchronous Event Request (0Ch): Supported 00:07:25.300 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:25.300 Directive Send (19h): Supported 00:07:25.300 Directive Receive (1Ah): Supported 00:07:25.300 Virtualization Management (1Ch): Supported 00:07:25.300 Doorbell Buffer Config (7Ch): Supported 00:07:25.300 Format NVM (80h): Supported LBA-Change 00:07:25.300 I/O Commands 00:07:25.300 ------------ 00:07:25.300 Flush (00h): Supported LBA-Change 00:07:25.300 Write (01h): Supported LBA-Change 00:07:25.300 Read (02h): Supported 00:07:25.300 Compare (05h): Supported 00:07:25.300 Write Zeroes (08h): Supported LBA-Change 00:07:25.300 Dataset Management (09h): Supported LBA-Change 00:07:25.300 Unknown (0Ch): Supported 00:07:25.300 Unknown (12h): Supported 00:07:25.300 Copy (19h): Supported LBA-Change 00:07:25.300 Unknown (1Dh): Supported LBA-Change 00:07:25.300 00:07:25.300 Error Log 00:07:25.300 ========= 00:07:25.300 00:07:25.300 Arbitration 00:07:25.300 =========== 00:07:25.300 Arbitration Burst: no limit 00:07:25.300 00:07:25.300 Power Management 00:07:25.300 ================ 00:07:25.300 Number of Power States: 1 00:07:25.300 Current Power State: Power State 
#0 00:07:25.300 Power State #0: 00:07:25.300 Max Power: 25.00 W 00:07:25.300 Non-Operational State: Operational 00:07:25.300 Entry Latency: 16 microseconds 00:07:25.300 Exit Latency: 4 microseconds 00:07:25.300 Relative Read Throughput: 0 00:07:25.300 Relative Read Latency: 0 00:07:25.300 Relative Write Throughput: 0 00:07:25.300 Relative Write Latency: 0 00:07:25.300 Idle Power: Not Reported 00:07:25.300 Active Power: Not Reported 00:07:25.300 Non-Operational Permissive Mode: Not Supported 00:07:25.300 00:07:25.300 Health Information 00:07:25.300 ================== 00:07:25.300 Critical Warnings: 00:07:25.300 Available Spare Space: OK 00:07:25.300 Temperature: OK 00:07:25.300 Device Reliability: OK 00:07:25.300 Read Only: No 00:07:25.300 Volatile Memory Backup: OK 00:07:25.300 Current Temperature: 323 Kelvin (50 Celsius) 00:07:25.300 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:25.300 Available Spare: 0% 00:07:25.300 Available Spare Threshold: 0% 00:07:25.300 Life Percentage Used: 0% 00:07:25.300 Data Units Read: 815 00:07:25.300 Data Units Written: 744 00:07:25.300 Host Read Commands: 37348 00:07:25.300 Host Write Commands: 36772 00:07:25.300 Controller Busy Time: 0 minutes 00:07:25.300 Power Cycles: 0 00:07:25.300 Power On Hours: 0 hours 00:07:25.300 Unsafe Shutdowns: 0 00:07:25.300 Unrecoverable Media Errors: 0 00:07:25.300 Lifetime Error Log Entries: 0 00:07:25.300 Warning Temperature Time: 0 minutes 00:07:25.300 Critical Temperature Time: 0 minutes 00:07:25.300 00:07:25.300 Number of Queues 00:07:25.300 ================ 00:07:25.300 Number of I/O Submission Queues: 64 00:07:25.300 Number of I/O Completion Queues: 64 00:07:25.300 00:07:25.300 ZNS Specific Controller Data 00:07:25.300 ============================ 00:07:25.300 Zone Append Size Limit: 0 00:07:25.300 00:07:25.300 00:07:25.300 Active Namespaces 00:07:25.300 ================= 00:07:25.300 Namespace ID:1 00:07:25.300 Error Recovery Timeout: Unlimited 00:07:25.300 Command Set Identifier: NVM (00h) 00:07:25.300 Deallocate: Supported 00:07:25.300 Deallocated/Unwritten Error: Supported 00:07:25.300 Deallocated Read Value: All 0x00 00:07:25.300 Deallocate in Write Zeroes: Not Supported 00:07:25.300 Deallocated Guard Field: 0xFFFF 00:07:25.300 Flush: Supported 00:07:25.300 Reservation: Not Supported 00:07:25.300 Namespace Sharing Capabilities: Multiple Controllers 00:07:25.300 Size (in LBAs): 262144 (1GiB) 00:07:25.300 Capacity (in LBAs): 262144 (1GiB) 00:07:25.300 Utilization (in LBAs): 262144 (1GiB) 00:07:25.300 Thin Provisioning: Not Supported 00:07:25.300 Per-NS Atomic Units: No 00:07:25.300 Maximum Single Source Range Length: 128 00:07:25.300 Maximum Copy Length: 128 00:07:25.300 Maximum Source Range Count: 128 00:07:25.300 NGUID/EUI64 Never Reused: No 00:07:25.300 Namespace Write Protected: No 00:07:25.300 Endurance group ID: 1 00:07:25.300 Number of LBA Formats: 8 00:07:25.300 Current LBA Format: LBA Format #04 00:07:25.300 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:25.300 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:25.300 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:25.300 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:25.300 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:25.300 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:25.300 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:25.300 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:25.300 00:07:25.300 Get Feature FDP: 00:07:25.300 ================ 00:07:25.300 Enabled: Yes 00:07:25.300 FDP 
configuration index: 0 00:07:25.300 00:07:25.300 FDP configurations log page 00:07:25.300 =========================== 00:07:25.300 Number of FDP configurations: 1 00:07:25.300 Version: 0 00:07:25.300 Size: 112 00:07:25.300 FDP Configuration Descriptor: 0 00:07:25.300 Descriptor Size: 96 00:07:25.300 Reclaim Group Identifier format: 2 00:07:25.300 FDP Volatile Write Cache: Not Present 00:07:25.300 FDP Configuration: Valid 00:07:25.300 Vendor Specific Size: 0 00:07:25.300 Number of Reclaim Groups: 2 00:07:25.300 Number of Reclaim Unit Handles: 8 00:07:25.300 Max Placement Identifiers: 128 00:07:25.300 Number of Namespaces Supported: 256 00:07:25.300 Reclaim unit Nominal Size: 6000000 bytes 00:07:25.300 Estimated Reclaim Unit Time Limit: Not Reported 00:07:25.300 RUH Desc #000: RUH Type: Initially Isolated 00:07:25.300 RUH Desc #001: RUH Type: Initially Isolated 00:07:25.300 RUH Desc #002: RUH Type: Initially Isolated 00:07:25.300 RUH Desc #003: RUH Type: Initially Isolated 00:07:25.300 RUH Desc #004: RUH Type: Initially Isolated 00:07:25.300 RUH Desc #005: RUH Type: Initially Isolated 00:07:25.300 RUH Desc #006: RUH Type: Initially Isolated 00:07:25.300 RUH Desc #007: RUH Type: Initially Isolated 00:07:25.300 00:07:25.300 FDP reclaim unit handle usage log page 00:07:25.300 ====================================== 00:07:25.300 Number of Reclaim Unit Handles: 8 00:07:25.300 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:25.300 RUH Usage Desc #001: RUH Attributes: Unused 00:07:25.300 RUH Usage Desc #002: RUH Attributes: Unused 00:07:25.300 RUH Usage Desc #003: RUH Attributes: Unused 00:07:25.300 RUH Usage Desc #004: RUH Attributes: Unused 00:07:25.300 RUH Usage Desc #005: RUH Attributes: Unused 00:07:25.300 RUH Usage Desc #006: RUH Attributes: Unused 00:07:25.300 RUH Usage Desc #007: RUH Attributes: Unused 00:07:25.300 00:07:25.300 FDP statistics log page 00:07:25.300 ======================= 00:07:25.300 Host bytes with metadata written: 480813056 00:07:25.300 Media bytes with metadata written: 480866304 00:07:25.300 Media bytes erased: 0 00:07:25.300 00:07:25.300 FDP events log page 00:07:25.300 =================== 00:07:25.300 Number of FDP events: 0 00:07:25.300 00:07:25.300 NVM Specific Namespace Data 00:07:25.300 =========================== 00:07:25.300 Logical Block Storage Tag Mask: 0 00:07:25.300 Protection Information Capabilities: 00:07:25.300 16b Guard Protection Information Storage Tag Support: No 00:07:25.300 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:25.300 Storage Tag Check Read Support: No 00:07:25.300 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.300 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.300 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.300 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.300 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.300 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.301 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.301 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.301 ===================================================== 00:07:25.301 NVMe Controller at 0000:00:10.0
[1b36:0010] 00:07:25.301 ===================================================== 00:07:25.301 Controller Capabilities/Features 00:07:25.301 ================================ 00:07:25.301 Vendor ID: 1b36 00:07:25.301 Subsystem Vendor ID: 1af4 00:07:25.301 Serial Number: 12340 00:07:25.301 Model Number: QEMU NVMe Ctrl 00:07:25.301 Firmware Version: 8.0.0 00:07:25.301 Recommended Arb Burst: 6 00:07:25.301 IEEE OUI Identifier: 00 54 52 00:07:25.301 Multi-path I/O 00:07:25.301 May have multiple subsystem ports: No 00:07:25.301 May have multiple controllers: No 00:07:25.301 Associated with SR-IOV VF: No 00:07:25.301 Max Data Transfer Size: 524288 00:07:25.301 Max Number of Namespaces: 256 00:07:25.301 Max Number of I/O Queues: 64 00:07:25.301 NVMe Specification Version (VS): 1.4 00:07:25.301 NVMe Specification Version (Identify): 1.4 00:07:25.301 Maximum Queue Entries: 2048 00:07:25.301 Contiguous Queues Required: Yes 00:07:25.301 Arbitration Mechanisms Supported 00:07:25.301 Weighted Round Robin: Not Supported 00:07:25.301 Vendor Specific: Not Supported 00:07:25.301 Reset Timeout: 7500 ms 00:07:25.301 Doorbell Stride: 4 bytes 00:07:25.301 NVM Subsystem Reset: Not Supported 00:07:25.301 Command Sets Supported 00:07:25.301 NVM Command Set: Supported 00:07:25.301 Boot Partition: Not Supported 00:07:25.301 Memory Page Size Minimum: 4096 bytes 00:07:25.301 Memory Page Size Maximum: 65536 bytes 00:07:25.301 Persistent Memory Region: Not Supported 00:07:25.301 Optional Asynchronous Events Supported 00:07:25.301 Namespace Attribute Notices: Supported 00:07:25.301 Firmware Activation Notices: Not Supported 00:07:25.301 ANA Change Notices: Not Supported 00:07:25.301 PLE Aggregate Log Change Notices: Not Supported 00:07:25.301 LBA Status Info Alert Notices: Not Supported 00:07:25.301 EGE Aggregate Log Change Notices: Not Supported 00:07:25.301 Normal NVM Subsystem Shutdown event: Not Supported 00:07:25.301 Zone Descriptor Change Notices: Not Supported 00:07:25.301 Discovery Log Change Notices: Not Supported 00:07:25.301 Controller Attributes 00:07:25.301 128-bit Host Identifier: Not Supported 00:07:25.301 Non-Operational Permissive Mode: Not Supported 00:07:25.301 NVM Sets: Not Supported 00:07:25.301 Read Recovery Levels: Not Supported 00:07:25.301 Endurance Groups: Not Supported 00:07:25.301 Predictable Latency Mode: Not Supported 00:07:25.301 Traffic Based Keep Alive: Not Supported 00:07:25.301 Namespace Granularity: Not Supported 00:07:25.301 SQ Associations: Not Supported 00:07:25.301 UUID List: Not Supported 00:07:25.301 Multi-Domain Subsystem: Not Supported 00:07:25.301 Fixed Capacity Management: Not Supported 00:07:25.301 Variable Capacity Management: Not Supported 00:07:25.301 Delete Endurance Group: Not Supported 00:07:25.301 Delete NVM Set: Not Supported 00:07:25.301 Extended LBA Formats Supported: Supported 00:07:25.301 Flexible Data Placement Supported: Not Supported 00:07:25.301 00:07:25.301 Controller Memory Buffer Support 00:07:25.301 ================================ 00:07:25.301 Supported: No 00:07:25.301 00:07:25.301 Persistent Memory Region Support 00:07:25.301 ================================ 00:07:25.301 Supported: No 00:07:25.301 00:07:25.301 Admin Command Set Attributes 00:07:25.301 ============================ 00:07:25.301 Security Send/Receive: Not Supported 00:07:25.301 Format NVM: Supported 00:07:25.301 Firmware Activate/Download: Not Supported 00:07:25.301 Namespace Management: Supported 00:07:25.301 Device Self-Test: Not Supported 00:07:25.301 Directives: Supported 00:07:25.301
NVMe-MI: Not Supported 00:07:25.301 Virtualization Management: Not Supported 00:07:25.301 Doorbell Buffer Config: Supported 00:07:25.301 Get LBA Status Capability: Not Supported 00:07:25.301 Command & Feature Lockdown Capability: Not Supported 00:07:25.301 Abort Command Limit: 4 00:07:25.301 Async Event Request Limit: 4 00:07:25.301 Number of Firmware Slots: N/A 00:07:25.301 Firmware Slot 1 Read-Only: N/A 00:07:25.301 Firmware Activation Without Reset: N/A 00:07:25.301 Multiple Update Detection Support: N/A 00:07:25.301 Firmware Update Granularity: No Information Provided 00:07:25.301 Per-Namespace SMART Log: Yes 00:07:25.301 Asymmetric Namespace Access Log Page: Not Supported 00:07:25.301 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:25.301 Command Effects Log Page: Supported 00:07:25.301 Get Log Page Extended Data: Supported 00:07:25.301 Telemetry Log Pages: Not Supported 00:07:25.301 Persistent Event Log Pages: Not Supported 00:07:25.301 Supported Log Pages Log Page: May Support 00:07:25.301 Commands Supported & Effects Log Page: Not Supported 00:07:25.301 Feature Identifiers & Effects Log Page:May Support 00:07:25.301 NVMe-MI Commands & Effects Log Page: May Support 00:07:25.301 Data Area 4 for Telemetry Log: Not Supported 00:07:25.301 Error Log Page Entries Supported: 1 00:07:25.301 Keep Alive: Not Supported 00:07:25.301 00:07:25.301 NVM Command Set Attributes 00:07:25.301 ========================== 00:07:25.301 Submission Queue Entry Size 00:07:25.301 Max: 64 00:07:25.301 Min: 64 00:07:25.301 Completion Queue Entry Size 00:07:25.301 Max: 16 00:07:25.301 Min: 16 00:07:25.301 Number of Namespaces: 256 00:07:25.301 Compare Command: Supported 00:07:25.301 Write Uncorrectable Command: Not Supported 00:07:25.301 Dataset Management Command: Supported 00:07:25.301 Write Zeroes Command: Supported 00:07:25.301 Set Features Save Field: Supported 00:07:25.301 Reservations: Not Supported 00:07:25.301 Timestamp: Supported 00:07:25.301 Copy: Supported 00:07:25.301 Volatile Write Cache: Present 00:07:25.301 Atomic Write Unit (Normal): 1 00:07:25.301 Atomic Write Unit (PFail): 1 00:07:25.301 Atomic Compare & Write Unit: 1 00:07:25.301 Fused Compare & Write: Not Supported 00:07:25.301 Scatter-Gather List 00:07:25.301 SGL Command Set: Supported 00:07:25.301 SGL Keyed: Not Supported 00:07:25.301 SGL Bit Bucket Descriptor: Not Supported 00:07:25.301 SGL Metadata Pointer: Not Supported 00:07:25.301 Oversized SGL: Not Supported 00:07:25.301 SGL Metadata Address: Not Supported 00:07:25.301 SGL Offset: Not Supported 00:07:25.301 Transport SGL Data Block: Not Supported 00:07:25.301 Replay Protected Memory Block: Not Supported 00:07:25.301 00:07:25.301 Firmware Slot Information 00:07:25.301 ========================= 00:07:25.301 Active slot: 1 00:07:25.301 Slot 1 Firmware Revision: 1.0 00:07:25.301 00:07:25.301 00:07:25.301 Commands Supported and Effects 00:07:25.301 ============================== 00:07:25.301 Admin Commands 00:07:25.301 -------------- 00:07:25.301 Delete I/O Submission Queue (00h): Supported 00:07:25.301 Create I/O Submission Queue (01h): Supported 00:07:25.301 Get Log Page (02h): Supported 00:07:25.301 Delete I/O Completion Queue (04h): Supported 00:07:25.301 Create I/O Completion Queue (05h): Supported 00:07:25.301 Identify (06h): Supported 00:07:25.301 Abort (08h): Supported 00:07:25.301 Set Features (09h): Supported 00:07:25.301 Get Features (0Ah): Supported 00:07:25.301 Asynchronous Event Request (0Ch): Supported 00:07:25.301 [2024-11-29 07:39:15.153113]
nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 62681 terminated unexpected 00:07:25.301 [2024-11-29 07:39:15.154349] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 62681 terminated unexpected 00:07:25.301 [2024-11-29 07:39:15.154877] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 62681 terminated unexpected 00:07:25.301 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:25.301 Directive Send (19h): Supported 00:07:25.301 Directive Receive (1Ah): Supported 00:07:25.301 Virtualization Management (1Ch): Supported 00:07:25.301 Doorbell Buffer Config (7Ch): Supported 00:07:25.301 Format NVM (80h): Supported LBA-Change 00:07:25.301 I/O Commands 00:07:25.301 ------------ 00:07:25.301 Flush (00h): Supported LBA-Change 00:07:25.301 Write (01h): Supported LBA-Change 00:07:25.301 Read (02h): Supported 00:07:25.301 Compare (05h): Supported 00:07:25.301 Write Zeroes (08h): Supported LBA-Change 00:07:25.301 Dataset Management (09h): Supported LBA-Change 00:07:25.301 Unknown (0Ch): Supported 00:07:25.301 Unknown (12h): Supported 00:07:25.301 Copy (19h): Supported LBA-Change 00:07:25.301 Unknown (1Dh): Supported LBA-Change 00:07:25.301 00:07:25.301 Error Log 00:07:25.301 ========= 00:07:25.301 00:07:25.301 Arbitration 00:07:25.301 =========== 00:07:25.301 Arbitration Burst: no limit 00:07:25.301 00:07:25.301 Power Management 00:07:25.301 ================ 00:07:25.301 Number of Power States: 1 00:07:25.301 Current Power State: Power State #0 00:07:25.301 Power State #0: 00:07:25.301 Max Power: 25.00 W 00:07:25.302 Non-Operational State: Operational 00:07:25.302 Entry Latency: 16 microseconds 00:07:25.302 Exit Latency: 4 microseconds 00:07:25.302 Relative Read Throughput: 0 00:07:25.302 Relative Read Latency: 0 00:07:25.302 Relative Write Throughput: 0 00:07:25.302 Relative Write Latency: 0 00:07:25.302 Idle Power: Not Reported 00:07:25.302 Active Power: Not Reported 00:07:25.302 Non-Operational Permissive Mode: Not Supported 00:07:25.302 00:07:25.302 Health Information 00:07:25.302 ================== 00:07:25.302 Critical Warnings: 00:07:25.302 Available Spare Space: OK 00:07:25.302 Temperature: OK 00:07:25.302 Device Reliability: OK 00:07:25.302 Read Only: No 00:07:25.302 Volatile Memory Backup: OK 00:07:25.302 Current Temperature: 323 Kelvin (50 Celsius) 00:07:25.302 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:25.302 Available Spare: 0% 00:07:25.302 Available Spare Threshold: 0% 00:07:25.302 Life Percentage Used: 0% 00:07:25.302 Data Units Read: 661 00:07:25.302 Data Units Written: 589 00:07:25.302 Host Read Commands: 36021 00:07:25.302 Host Write Commands: 35807 00:07:25.302 Controller Busy Time: 0 minutes 00:07:25.302 Power Cycles: 0 00:07:25.302 Power On Hours: 0 hours 00:07:25.302 Unsafe Shutdowns: 0 00:07:25.302 Unrecoverable Media Errors: 0 00:07:25.302 Lifetime Error Log Entries: 0 00:07:25.302 Warning Temperature Time: 0 minutes 00:07:25.302 Critical Temperature Time: 0 minutes 00:07:25.302 00:07:25.302 Number of Queues 00:07:25.302 ================ 00:07:25.302 Number of I/O Submission Queues: 64 00:07:25.302 Number of I/O Completion Queues: 64 00:07:25.302 00:07:25.302 ZNS Specific Controller Data 00:07:25.302 ============================ 00:07:25.302 Zone Append Size Limit: 0 00:07:25.302 00:07:25.302 00:07:25.302 Active Namespaces 00:07:25.302 ================= 00:07:25.302 Namespace ID:1 00:07:25.302 Error Recovery Timeout: Unlimited
00:07:25.302 Command Set Identifier: NVM (00h) 00:07:25.302 Deallocate: Supported 00:07:25.302 Deallocated/Unwritten Error: Supported 00:07:25.302 Deallocated Read Value: All 0x00 00:07:25.302 Deallocate in Write Zeroes: Not Supported 00:07:25.302 Deallocated Guard Field: 0xFFFF 00:07:25.302 Flush: Supported 00:07:25.302 Reservation: Not Supported 00:07:25.302 Metadata Transferred as: Separate Metadata Buffer 00:07:25.302 Namespace Sharing Capabilities: Private 00:07:25.302 Size (in LBAs): 1548666 (5GiB) 00:07:25.302 Capacity (in LBAs): 1548666 (5GiB) 00:07:25.302 Utilization (in LBAs): 1548666 (5GiB) 00:07:25.302 Thin Provisioning: Not Supported 00:07:25.302 Per-NS Atomic Units: No 00:07:25.302 Maximum Single Source Range Length: 128 00:07:25.302 Maximum Copy Length: 128 00:07:25.302 Maximum Source Range Count: 128 00:07:25.302 NGUID/EUI64 Never Reused: No 00:07:25.302 Namespace Write Protected: No 00:07:25.302 Number of LBA Formats: 8 00:07:25.302 Current LBA Format: LBA Format #07 00:07:25.302 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:25.302 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:25.302 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:25.302 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:25.302 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:25.302 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:25.302 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:25.302 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:25.302 00:07:25.302 NVM Specific Namespace Data 00:07:25.302 =========================== 00:07:25.302 Logical Block Storage Tag Mask: 0 00:07:25.302 Protection Information Capabilities: 00:07:25.302 16b Guard Protection Information Storage Tag Support: No 00:07:25.302 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:25.302 Storage Tag Check Read Support: No 00:07:25.302 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.302 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.302 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.302 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.302 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.302 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.302 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.302 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.302 ===================================================== 00:07:25.302 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:25.302 ===================================================== 00:07:25.302 Controller Capabilities/Features 00:07:25.302 ================================ 00:07:25.302 Vendor ID: 1b36 00:07:25.302 Subsystem Vendor ID: 1af4 00:07:25.302 Serial Number: 12342 00:07:25.302 Model Number: QEMU NVMe Ctrl 00:07:25.302 Firmware Version: 8.0.0 00:07:25.302 Recommended Arb Burst: 6 00:07:25.302 IEEE OUI Identifier: 00 54 52 00:07:25.302 Multi-path I/O 00:07:25.302 May have multiple subsystem ports: No 00:07:25.302 May have multiple controllers: No 00:07:25.302 Associated with SR-IOV VF: No 00:07:25.302 Max Data Transfer Size: 524288 00:07:25.302 Max Number of Namespaces: 256 00:07:25.302 Max Number of 
I/O Queues: 64 00:07:25.302 NVMe Specification Version (VS): 1.4 00:07:25.302 NVMe Specification Version (Identify): 1.4 00:07:25.302 Maximum Queue Entries: 2048 00:07:25.302 Contiguous Queues Required: Yes 00:07:25.302 Arbitration Mechanisms Supported 00:07:25.302 Weighted Round Robin: Not Supported 00:07:25.302 Vendor Specific: Not Supported 00:07:25.302 Reset Timeout: 7500 ms 00:07:25.302 Doorbell Stride: 4 bytes 00:07:25.302 NVM Subsystem Reset: Not Supported 00:07:25.302 Command Sets Supported 00:07:25.302 NVM Command Set: Supported 00:07:25.302 Boot Partition: Not Supported 00:07:25.302 Memory Page Size Minimum: 4096 bytes 00:07:25.302 Memory Page Size Maximum: 65536 bytes 00:07:25.302 Persistent Memory Region: Not Supported 00:07:25.302 Optional Asynchronous Events Supported 00:07:25.302 Namespace Attribute Notices: Supported 00:07:25.302 Firmware Activation Notices: Not Supported 00:07:25.302 ANA Change Notices: Not Supported 00:07:25.302 PLE Aggregate Log Change Notices: Not Supported 00:07:25.302 LBA Status Info Alert Notices: Not Supported 00:07:25.302 EGE Aggregate Log Change Notices: Not Supported 00:07:25.302 Normal NVM Subsystem Shutdown event: Not Supported 00:07:25.302 Zone Descriptor Change Notices: Not Supported 00:07:25.302 Discovery Log Change Notices: Not Supported 00:07:25.302 Controller Attributes 00:07:25.302 128-bit Host Identifier: Not Supported 00:07:25.302 Non-Operational Permissive Mode: Not Supported 00:07:25.302 NVM Sets: Not Supported 00:07:25.302 Read Recovery Levels: Not Supported 00:07:25.302 Endurance Groups: Not Supported 00:07:25.302 Predictable Latency Mode: Not Supported 00:07:25.302 Traffic Based Keep Alive: Not Supported 00:07:25.302 Namespace Granularity: Not Supported 00:07:25.302 SQ Associations: Not Supported 00:07:25.302 UUID List: Not Supported 00:07:25.302 Multi-Domain Subsystem: Not Supported 00:07:25.302 Fixed Capacity Management: Not Supported 00:07:25.302 Variable Capacity Management: Not Supported 00:07:25.302 Delete Endurance Group: Not Supported 00:07:25.302 Delete NVM Set: Not Supported 00:07:25.302 Extended LBA Formats Supported: Supported 00:07:25.302 Flexible Data Placement Supported: Not Supported 00:07:25.302 00:07:25.302 Controller Memory Buffer Support 00:07:25.302 ================================ 00:07:25.302 Supported: No 00:07:25.302 00:07:25.302 Persistent Memory Region Support 00:07:25.303 ================================ 00:07:25.303 Supported: No 00:07:25.303 00:07:25.303 Admin Command Set Attributes 00:07:25.303 ============================ 00:07:25.303 Security Send/Receive: Not Supported 00:07:25.303 Format NVM: Supported 00:07:25.303 Firmware Activate/Download: Not Supported 00:07:25.303 Namespace Management: Supported 00:07:25.303 Device Self-Test: Not Supported 00:07:25.303 Directives: Supported 00:07:25.303 NVMe-MI: Not Supported 00:07:25.303 Virtualization Management: Not Supported 00:07:25.303 Doorbell Buffer Config: Supported 00:07:25.303 Get LBA Status Capability: Not Supported 00:07:25.303 Command & Feature Lockdown Capability: Not Supported 00:07:25.303 Abort Command Limit: 4 00:07:25.303 Async Event Request Limit: 4 00:07:25.303 Number of Firmware Slots: N/A 00:07:25.303 Firmware Slot 1 Read-Only: N/A 00:07:25.303 Firmware Activation Without Reset: N/A 00:07:25.303 Multiple Update Detection Support: N/A 00:07:25.303 Firmware Update Granularity: No Information Provided 00:07:25.303 Per-Namespace SMART Log: Yes 00:07:25.303 Asymmetric Namespace Access Log Page: Not Supported 00:07:25.303 Subsystem NQN:
nqn.2019-08.org.qemu:12342 00:07:25.303 Command Effects Log Page: Supported 00:07:25.303 Get Log Page Extended Data: Supported 00:07:25.303 Telemetry Log Pages: Not Supported 00:07:25.303 Persistent Event Log Pages: Not Supported 00:07:25.303 Supported Log Pages Log Page: May Support 00:07:25.303 Commands Supported & Effects Log Page: Not Supported 00:07:25.303 Feature Identifiers & Effects Log Page:May Support 00:07:25.303 NVMe-MI Commands & Effects Log Page: May Support 00:07:25.303 Data Area 4 for Telemetry Log: Not Supported 00:07:25.303 Error Log Page Entries Supported: 1 00:07:25.303 Keep Alive: Not Supported 00:07:25.303 00:07:25.303 NVM Command Set Attributes 00:07:25.303 ========================== 00:07:25.303 Submission Queue Entry Size 00:07:25.303 Max: 64 00:07:25.303 Min: 64 00:07:25.303 Completion Queue Entry Size 00:07:25.303 Max: 16 00:07:25.303 Min: 16 00:07:25.303 Number of Namespaces: 256 00:07:25.303 Compare Command: Supported 00:07:25.303 Write Uncorrectable Command: Not Supported 00:07:25.303 Dataset Management Command: Supported 00:07:25.303 Write Zeroes Command: Supported 00:07:25.303 Set Features Save Field: Supported 00:07:25.303 Reservations: Not Supported 00:07:25.303 Timestamp: Supported 00:07:25.303 Copy: Supported 00:07:25.303 Volatile Write Cache: Present 00:07:25.303 Atomic Write Unit (Normal): 1 00:07:25.303 Atomic Write Unit (PFail): 1 00:07:25.303 Atomic Compare & Write Unit: 1 00:07:25.303 Fused Compare & Write: Not Supported 00:07:25.303 Scatter-Gather List 00:07:25.303 SGL Command Set: Supported 00:07:25.303 SGL Keyed: Not Supported 00:07:25.303 SGL Bit Bucket Descriptor: Not Supported 00:07:25.303 SGL Metadata Pointer: Not Supported 00:07:25.303 Oversized SGL: Not Supported 00:07:25.303 SGL Metadata Address: Not Supported 00:07:25.303 SGL Offset: Not Supported 00:07:25.303 Transport SGL Data Block: Not Supported 00:07:25.303 Replay Protected Memory Block: Not Supported 00:07:25.303 00:07:25.303 Firmware Slot Information 00:07:25.303 ========================= 00:07:25.303 Active slot: 1 00:07:25.303 Slot 1 Firmware Revision: 1.0 00:07:25.303 00:07:25.303 00:07:25.303 Commands Supported and Effects 00:07:25.303 ============================== 00:07:25.303 Admin Commands 00:07:25.303 -------------- 00:07:25.303 Delete I/O Submission Queue (00h): Supported 00:07:25.303 Create I/O Submission Queue (01h): Supported 00:07:25.303 Get Log Page (02h): Supported 00:07:25.303 Delete I/O Completion Queue (04h): Supported 00:07:25.303 Create I/O Completion Queue (05h): Supported 00:07:25.303 Identify (06h): Supported 00:07:25.303 Abort (08h): Supported 00:07:25.303 Set Features (09h): Supported 00:07:25.303 Get Features (0Ah): Supported 00:07:25.303 Asynchronous Event Request (0Ch): Supported 00:07:25.303 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:25.303 Directive Send (19h): Supported 00:07:25.303 Directive Receive (1Ah): Supported 00:07:25.303 Virtualization Management (1Ch): Supported 00:07:25.303 Doorbell Buffer Config (7Ch): Supported 00:07:25.303 Format NVM (80h): Supported LBA-Change 00:07:25.303 I/O Commands 00:07:25.303 ------------ 00:07:25.303 Flush (00h): Supported LBA-Change 00:07:25.303 Write (01h): Supported LBA-Change 00:07:25.303 Read (02h): Supported 00:07:25.303 Compare (05h): Supported 00:07:25.303 Write Zeroes (08h): Supported LBA-Change 00:07:25.303 Dataset Management (09h): Supported LBA-Change 00:07:25.303 Unknown (0Ch): Supported 00:07:25.303 Unknown (12h): Supported 00:07:25.303 Copy (19h): Supported LBA-Change 
00:07:25.303 Unknown (1Dh): Supported LBA-Change 00:07:25.303 00:07:25.303 Error Log 00:07:25.303 ========= 00:07:25.303 00:07:25.303 Arbitration 00:07:25.303 =========== 00:07:25.303 Arbitration Burst: no limit 00:07:25.303 00:07:25.303 Power Management 00:07:25.303 ================ 00:07:25.303 Number of Power States: 1 00:07:25.303 Current Power State: Power State #0 00:07:25.303 Power State #0: 00:07:25.303 Max Power: 25.00 W 00:07:25.303 Non-Operational State: Operational 00:07:25.303 Entry Latency: 16 microseconds 00:07:25.303 Exit Latency: 4 microseconds 00:07:25.303 Relative Read Throughput: 0 00:07:25.303 Relative Read Latency: 0 00:07:25.303 Relative Write Throughput: 0 00:07:25.303 Relative Write Latency: 0 00:07:25.303 Idle Power: Not Reported 00:07:25.303 Active Power: Not Reported 00:07:25.303 Non-Operational Permissive Mode: Not Supported 00:07:25.303 00:07:25.303 Health Information 00:07:25.303 ================== 00:07:25.303 Critical Warnings: 00:07:25.303 Available Spare Space: OK 00:07:25.303 Temperature: OK 00:07:25.303 Device Reliability: OK 00:07:25.303 Read Only: No 00:07:25.303 Volatile Memory Backup: OK 00:07:25.303 Current Temperature: 323 Kelvin (50 Celsius) 00:07:25.303 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:25.303 Available Spare: 0% 00:07:25.303 Available Spare Threshold: 0% 00:07:25.303 Life Percentage Used: 0% 00:07:25.303 Data Units Read: 2183 00:07:25.303 Data Units Written: 1970 00:07:25.303 Host Read Commands: 109994 00:07:25.303 Host Write Commands: 108263 00:07:25.303 Controller Busy Time: 0 minutes 00:07:25.303 Power Cycles: 0 00:07:25.303 Power On Hours: 0 hours 00:07:25.303 Unsafe Shutdowns: 0 00:07:25.303 Unrecoverable Media Errors: 0 00:07:25.303 Lifetime Error Log Entries: 0 00:07:25.303 Warning Temperature Time: 0 minutes 00:07:25.303 Critical Temperature Time: 0 minutes 00:07:25.303 00:07:25.303 Number of Queues 00:07:25.303 ================ 00:07:25.303 Number of I/O Submission Queues: 64 00:07:25.303 Number of I/O Completion Queues: 64 00:07:25.303 00:07:25.303 ZNS Specific Controller Data 00:07:25.303 ============================ 00:07:25.303 Zone Append Size Limit: 0 00:07:25.303 00:07:25.303 00:07:25.303 Active Namespaces 00:07:25.303 ================= 00:07:25.303 Namespace ID:1 00:07:25.303 Error Recovery Timeout: Unlimited 00:07:25.303 Command Set Identifier: NVM (00h) 00:07:25.303 Deallocate: Supported 00:07:25.303 Deallocated/Unwritten Error: Supported 00:07:25.303 Deallocated Read Value: All 0x00 00:07:25.303 Deallocate in Write Zeroes: Not Supported 00:07:25.303 Deallocated Guard Field: 0xFFFF 00:07:25.303 Flush: Supported 00:07:25.303 Reservation: Not Supported 00:07:25.303 Namespace Sharing Capabilities: Private 00:07:25.303 Size (in LBAs): 1048576 (4GiB) 00:07:25.303 Capacity (in LBAs): 1048576 (4GiB) 00:07:25.303 Utilization (in LBAs): 1048576 (4GiB) 00:07:25.303 Thin Provisioning: Not Supported 00:07:25.303 Per-NS Atomic Units: No 00:07:25.303 Maximum Single Source Range Length: 128 00:07:25.303 Maximum Copy Length: 128 00:07:25.303 Maximum Source Range Count: 128 00:07:25.303 NGUID/EUI64 Never Reused: No 00:07:25.303 Namespace Write Protected: No 00:07:25.303 Number of LBA Formats: 8 00:07:25.303 Current LBA Format: LBA Format #04 00:07:25.303 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:25.303 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:25.303 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:25.303 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:25.303 LBA Format #04: Data Size: 
4096 Metadata Size: 0 00:07:25.303 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:25.303 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:25.303 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:25.303 00:07:25.303 NVM Specific Namespace Data 00:07:25.303 =========================== 00:07:25.303 Logical Block Storage Tag Mask: 0 00:07:25.303 Protection Information Capabilities: 00:07:25.303 16b Guard Protection Information Storage Tag Support: No 00:07:25.304 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:25.304 Storage Tag Check Read Support: No 00:07:25.304 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.304 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.304 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.304 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.304 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.304 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.304 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.304 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.304 Namespace ID:2 00:07:25.304 Error Recovery Timeout: Unlimited 00:07:25.304 Command Set Identifier: NVM (00h) 00:07:25.304 Deallocate: Supported 00:07:25.304 Deallocated/Unwritten Error: Supported 00:07:25.304 Deallocated Read Value: All 0x00 00:07:25.304 Deallocate in Write Zeroes: Not Supported 00:07:25.304 Deallocated Guard Field: 0xFFFF 00:07:25.304 Flush: Supported 00:07:25.304 Reservation: Not Supported 00:07:25.304 Namespace Sharing Capabilities: Private 00:07:25.304 Size (in LBAs): 1048576 (4GiB) 00:07:25.304 Capacity (in LBAs): 1048576 (4GiB) 00:07:25.304 Utilization (in LBAs): 1048576 (4GiB) 00:07:25.304 Thin Provisioning: Not Supported 00:07:25.304 Per-NS Atomic Units: No 00:07:25.304 Maximum Single Source Range Length: 128 00:07:25.304 Maximum Copy Length: 128 00:07:25.304 Maximum Source Range Count: 128 00:07:25.304 NGUID/EUI64 Never Reused: No 00:07:25.304 Namespace Write Protected: No 00:07:25.304 Number of LBA Formats: 8 00:07:25.304 Current LBA Format: LBA Format #04 00:07:25.304 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:25.304 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:25.304 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:25.304 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:25.304 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:25.304 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:25.304 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:25.304 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:25.304 00:07:25.304 NVM Specific Namespace Data 00:07:25.304 =========================== 00:07:25.304 Logical Block Storage Tag Mask: 0 00:07:25.304 Protection Information Capabilities: 00:07:25.304 16b Guard Protection Information Storage Tag Support: No 00:07:25.304 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:25.304 Storage Tag Check Read Support: No 00:07:25.304 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.304 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard 
PI 00:07:25.304 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.304 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.304 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.304 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.304 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.304 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.304 Namespace ID:3 00:07:25.304 Error Recovery Timeout: Unlimited 00:07:25.304 Command Set Identifier: NVM (00h) 00:07:25.304 Deallocate: Supported 00:07:25.304 Deallocated/Unwritten Error: Supported 00:07:25.304 Deallocated Read Value: All 0x00 00:07:25.304 Deallocate in Write Zeroes: Not Supported 00:07:25.304 Deallocated Guard Field: 0xFFFF 00:07:25.304 Flush: Supported 00:07:25.304 Reservation: Not Supported 00:07:25.304 Namespace Sharing Capabilities: Private 00:07:25.304 Size (in LBAs): 1048576 (4GiB) 00:07:25.304 Capacity (in LBAs): 1048576 (4GiB) 00:07:25.304 Utilization (in LBAs): 1048576 (4GiB) 00:07:25.304 Thin Provisioning: Not Supported 00:07:25.304 Per-NS Atomic Units: No 00:07:25.304 Maximum Single Source Range Length: 128 00:07:25.304 Maximum Copy Length: 128 00:07:25.304 Maximum Source Range Count: 128 00:07:25.304 NGUID/EUI64 Never Reused: No 00:07:25.304 Namespace Write Protected: No 00:07:25.304 Number of LBA Formats: 8 00:07:25.304 Current LBA Format: LBA Format #04 00:07:25.304 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:25.304 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:25.304 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:25.304 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:25.304 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:25.304 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:25.304 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:25.304 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:25.304 00:07:25.304 NVM Specific Namespace Data 00:07:25.304 =========================== 00:07:25.304 Logical Block Storage Tag Mask: 0 00:07:25.304 Protection Information Capabilities: 00:07:25.304 16b Guard Protection Information Storage Tag Support: No 00:07:25.304 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:25.304 Storage Tag Check Read Support: No 00:07:25.304 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.304 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.304 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.304 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.304 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.304 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.304 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.304 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.304 07:39:15 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:25.304 07:39:15 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:25.565 ===================================================== 00:07:25.565 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:25.565 ===================================================== 00:07:25.565 Controller Capabilities/Features 00:07:25.565 ================================ 00:07:25.565 Vendor ID: 1b36 00:07:25.565 Subsystem Vendor ID: 1af4 00:07:25.565 Serial Number: 12340 00:07:25.565 Model Number: QEMU NVMe Ctrl 00:07:25.565 Firmware Version: 8.0.0 00:07:25.565 Recommended Arb Burst: 6 00:07:25.565 IEEE OUI Identifier: 00 54 52 00:07:25.565 Multi-path I/O 00:07:25.565 May have multiple subsystem ports: No 00:07:25.565 May have multiple controllers: No 00:07:25.565 Associated with SR-IOV VF: No 00:07:25.565 Max Data Transfer Size: 524288 00:07:25.565 Max Number of Namespaces: 256 00:07:25.565 Max Number of I/O Queues: 64 00:07:25.565 NVMe Specification Version (VS): 1.4 00:07:25.565 NVMe Specification Version (Identify): 1.4 00:07:25.565 Maximum Queue Entries: 2048 00:07:25.565 Contiguous Queues Required: Yes 00:07:25.565 Arbitration Mechanisms Supported 00:07:25.565 Weighted Round Robin: Not Supported 00:07:25.565 Vendor Specific: Not Supported 00:07:25.565 Reset Timeout: 7500 ms 00:07:25.565 Doorbell Stride: 4 bytes 00:07:25.565 NVM Subsystem Reset: Not Supported 00:07:25.565 Command Sets Supported 00:07:25.565 NVM Command Set: Supported 00:07:25.565 Boot Partition: Not Supported 00:07:25.565 Memory Page Size Minimum: 4096 bytes 00:07:25.565 Memory Page Size Maximum: 65536 bytes 00:07:25.565 Persistent Memory Region: Not Supported 00:07:25.565 Optional Asynchronous Events Supported 00:07:25.565 Namespace Attribute Notices: Supported 00:07:25.565 Firmware Activation Notices: Not Supported 00:07:25.565 ANA Change Notices: Not Supported 00:07:25.565 PLE Aggregate Log Change Notices: Not Supported 00:07:25.565 LBA Status Info Alert Notices: Not Supported 00:07:25.565 EGE Aggregate Log Change Notices: Not Supported 00:07:25.565 Normal NVM Subsystem Shutdown event: Not Supported 00:07:25.565 Zone Descriptor Change Notices: Not Supported 00:07:25.565 Discovery Log Change Notices: Not Supported 00:07:25.565 Controller Attributes 00:07:25.565 128-bit Host Identifier: Not Supported 00:07:25.565 Non-Operational Permissive Mode: Not Supported 00:07:25.565 NVM Sets: Not Supported 00:07:25.565 Read Recovery Levels: Not Supported 00:07:25.565 Endurance Groups: Not Supported 00:07:25.565 Predictable Latency Mode: Not Supported 00:07:25.565 Traffic Based Keep ALive: Not Supported 00:07:25.565 Namespace Granularity: Not Supported 00:07:25.565 SQ Associations: Not Supported 00:07:25.565 UUID List: Not Supported 00:07:25.565 Multi-Domain Subsystem: Not Supported 00:07:25.565 Fixed Capacity Management: Not Supported 00:07:25.565 Variable Capacity Management: Not Supported 00:07:25.565 Delete Endurance Group: Not Supported 00:07:25.565 Delete NVM Set: Not Supported 00:07:25.565 Extended LBA Formats Supported: Supported 00:07:25.565 Flexible Data Placement Supported: Not Supported 00:07:25.565 00:07:25.565 Controller Memory Buffer Support 00:07:25.565 ================================ 00:07:25.565 Supported: No 00:07:25.565 00:07:25.565 Persistent Memory Region Support 00:07:25.565 ================================ 00:07:25.565 Supported: No 00:07:25.565 00:07:25.565 Admin Command Set Attributes 00:07:25.565 ============================ 00:07:25.565 Security Send/Receive: Not Supported 00:07:25.565 
Format NVM: Supported 00:07:25.565 Firmware Activate/Download: Not Supported 00:07:25.565 Namespace Management: Supported 00:07:25.565 Device Self-Test: Not Supported 00:07:25.565 Directives: Supported 00:07:25.565 NVMe-MI: Not Supported 00:07:25.565 Virtualization Management: Not Supported 00:07:25.565 Doorbell Buffer Config: Supported 00:07:25.565 Get LBA Status Capability: Not Supported 00:07:25.565 Command & Feature Lockdown Capability: Not Supported 00:07:25.565 Abort Command Limit: 4 00:07:25.565 Async Event Request Limit: 4 00:07:25.565 Number of Firmware Slots: N/A 00:07:25.565 Firmware Slot 1 Read-Only: N/A 00:07:25.565 Firmware Activation Without Reset: N/A 00:07:25.565 Multiple Update Detection Support: N/A 00:07:25.565 Firmware Update Granularity: No Information Provided 00:07:25.565 Per-Namespace SMART Log: Yes 00:07:25.565 Asymmetric Namespace Access Log Page: Not Supported 00:07:25.565 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:25.565 Command Effects Log Page: Supported 00:07:25.565 Get Log Page Extended Data: Supported 00:07:25.565 Telemetry Log Pages: Not Supported 00:07:25.565 Persistent Event Log Pages: Not Supported 00:07:25.565 Supported Log Pages Log Page: May Support 00:07:25.565 Commands Supported & Effects Log Page: Not Supported 00:07:25.565 Feature Identifiers & Effects Log Page:May Support 00:07:25.565 NVMe-MI Commands & Effects Log Page: May Support 00:07:25.565 Data Area 4 for Telemetry Log: Not Supported 00:07:25.565 Error Log Page Entries Supported: 1 00:07:25.565 Keep Alive: Not Supported 00:07:25.565 00:07:25.565 NVM Command Set Attributes 00:07:25.565 ========================== 00:07:25.565 Submission Queue Entry Size 00:07:25.565 Max: 64 00:07:25.565 Min: 64 00:07:25.565 Completion Queue Entry Size 00:07:25.565 Max: 16 00:07:25.565 Min: 16 00:07:25.565 Number of Namespaces: 256 00:07:25.565 Compare Command: Supported 00:07:25.565 Write Uncorrectable Command: Not Supported 00:07:25.565 Dataset Management Command: Supported 00:07:25.565 Write Zeroes Command: Supported 00:07:25.565 Set Features Save Field: Supported 00:07:25.565 Reservations: Not Supported 00:07:25.565 Timestamp: Supported 00:07:25.565 Copy: Supported 00:07:25.565 Volatile Write Cache: Present 00:07:25.565 Atomic Write Unit (Normal): 1 00:07:25.565 Atomic Write Unit (PFail): 1 00:07:25.565 Atomic Compare & Write Unit: 1 00:07:25.565 Fused Compare & Write: Not Supported 00:07:25.565 Scatter-Gather List 00:07:25.565 SGL Command Set: Supported 00:07:25.565 SGL Keyed: Not Supported 00:07:25.565 SGL Bit Bucket Descriptor: Not Supported 00:07:25.565 SGL Metadata Pointer: Not Supported 00:07:25.565 Oversized SGL: Not Supported 00:07:25.565 SGL Metadata Address: Not Supported 00:07:25.565 SGL Offset: Not Supported 00:07:25.565 Transport SGL Data Block: Not Supported 00:07:25.565 Replay Protected Memory Block: Not Supported 00:07:25.565 00:07:25.566 Firmware Slot Information 00:07:25.566 ========================= 00:07:25.566 Active slot: 1 00:07:25.566 Slot 1 Firmware Revision: 1.0 00:07:25.566 00:07:25.566 00:07:25.566 Commands Supported and Effects 00:07:25.566 ============================== 00:07:25.566 Admin Commands 00:07:25.566 -------------- 00:07:25.566 Delete I/O Submission Queue (00h): Supported 00:07:25.566 Create I/O Submission Queue (01h): Supported 00:07:25.566 Get Log Page (02h): Supported 00:07:25.566 Delete I/O Completion Queue (04h): Supported 00:07:25.566 Create I/O Completion Queue (05h): Supported 00:07:25.566 Identify (06h): Supported 00:07:25.566 Abort (08h): Supported 
00:07:25.566 Set Features (09h): Supported 00:07:25.566 Get Features (0Ah): Supported 00:07:25.566 Asynchronous Event Request (0Ch): Supported 00:07:25.566 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:25.566 Directive Send (19h): Supported 00:07:25.566 Directive Receive (1Ah): Supported 00:07:25.566 Virtualization Management (1Ch): Supported 00:07:25.566 Doorbell Buffer Config (7Ch): Supported 00:07:25.566 Format NVM (80h): Supported LBA-Change 00:07:25.566 I/O Commands 00:07:25.566 ------------ 00:07:25.566 Flush (00h): Supported LBA-Change 00:07:25.566 Write (01h): Supported LBA-Change 00:07:25.566 Read (02h): Supported 00:07:25.566 Compare (05h): Supported 00:07:25.566 Write Zeroes (08h): Supported LBA-Change 00:07:25.566 Dataset Management (09h): Supported LBA-Change 00:07:25.566 Unknown (0Ch): Supported 00:07:25.566 Unknown (12h): Supported 00:07:25.566 Copy (19h): Supported LBA-Change 00:07:25.566 Unknown (1Dh): Supported LBA-Change 00:07:25.566 00:07:25.566 Error Log 00:07:25.566 ========= 00:07:25.566 00:07:25.566 Arbitration 00:07:25.566 =========== 00:07:25.566 Arbitration Burst: no limit 00:07:25.566 00:07:25.566 Power Management 00:07:25.566 ================ 00:07:25.566 Number of Power States: 1 00:07:25.566 Current Power State: Power State #0 00:07:25.566 Power State #0: 00:07:25.566 Max Power: 25.00 W 00:07:25.566 Non-Operational State: Operational 00:07:25.566 Entry Latency: 16 microseconds 00:07:25.566 Exit Latency: 4 microseconds 00:07:25.566 Relative Read Throughput: 0 00:07:25.566 Relative Read Latency: 0 00:07:25.566 Relative Write Throughput: 0 00:07:25.566 Relative Write Latency: 0 00:07:25.566 Idle Power: Not Reported 00:07:25.566 Active Power: Not Reported 00:07:25.566 Non-Operational Permissive Mode: Not Supported 00:07:25.566 00:07:25.566 Health Information 00:07:25.566 ================== 00:07:25.566 Critical Warnings: 00:07:25.566 Available Spare Space: OK 00:07:25.566 Temperature: OK 00:07:25.566 Device Reliability: OK 00:07:25.566 Read Only: No 00:07:25.566 Volatile Memory Backup: OK 00:07:25.566 Current Temperature: 323 Kelvin (50 Celsius) 00:07:25.566 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:25.566 Available Spare: 0% 00:07:25.566 Available Spare Threshold: 0% 00:07:25.566 Life Percentage Used: 0% 00:07:25.566 Data Units Read: 661 00:07:25.566 Data Units Written: 589 00:07:25.566 Host Read Commands: 36021 00:07:25.566 Host Write Commands: 35807 00:07:25.566 Controller Busy Time: 0 minutes 00:07:25.566 Power Cycles: 0 00:07:25.566 Power On Hours: 0 hours 00:07:25.566 Unsafe Shutdowns: 0 00:07:25.566 Unrecoverable Media Errors: 0 00:07:25.566 Lifetime Error Log Entries: 0 00:07:25.566 Warning Temperature Time: 0 minutes 00:07:25.566 Critical Temperature Time: 0 minutes 00:07:25.566 00:07:25.566 Number of Queues 00:07:25.566 ================ 00:07:25.566 Number of I/O Submission Queues: 64 00:07:25.566 Number of I/O Completion Queues: 64 00:07:25.566 00:07:25.566 ZNS Specific Controller Data 00:07:25.566 ============================ 00:07:25.566 Zone Append Size Limit: 0 00:07:25.566 00:07:25.566 00:07:25.566 Active Namespaces 00:07:25.566 ================= 00:07:25.566 Namespace ID:1 00:07:25.566 Error Recovery Timeout: Unlimited 00:07:25.566 Command Set Identifier: NVM (00h) 00:07:25.566 Deallocate: Supported 00:07:25.566 Deallocated/Unwritten Error: Supported 00:07:25.566 Deallocated Read Value: All 0x00 00:07:25.566 Deallocate in Write Zeroes: Not Supported 00:07:25.566 Deallocated Guard Field: 0xFFFF 00:07:25.566 Flush: 
Supported 00:07:25.566 Reservation: Not Supported 00:07:25.566 Metadata Transferred as: Separate Metadata Buffer 00:07:25.566 Namespace Sharing Capabilities: Private 00:07:25.566 Size (in LBAs): 1548666 (5GiB) 00:07:25.566 Capacity (in LBAs): 1548666 (5GiB) 00:07:25.566 Utilization (in LBAs): 1548666 (5GiB) 00:07:25.566 Thin Provisioning: Not Supported 00:07:25.566 Per-NS Atomic Units: No 00:07:25.566 Maximum Single Source Range Length: 128 00:07:25.566 Maximum Copy Length: 128 00:07:25.566 Maximum Source Range Count: 128 00:07:25.566 NGUID/EUI64 Never Reused: No 00:07:25.566 Namespace Write Protected: No 00:07:25.566 Number of LBA Formats: 8 00:07:25.566 Current LBA Format: LBA Format #07 00:07:25.566 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:25.566 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:25.566 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:25.566 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:25.566 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:25.566 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:25.566 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:25.566 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:25.566 00:07:25.566 NVM Specific Namespace Data 00:07:25.566 =========================== 00:07:25.566 Logical Block Storage Tag Mask: 0 00:07:25.566 Protection Information Capabilities: 00:07:25.566 16b Guard Protection Information Storage Tag Support: No 00:07:25.566 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:25.566 Storage Tag Check Read Support: No 00:07:25.566 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.566 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.566 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.566 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.566 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.566 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.566 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.566 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.566 07:39:15 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:25.566 07:39:15 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:25.828 ===================================================== 00:07:25.828 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:25.828 ===================================================== 00:07:25.828 Controller Capabilities/Features 00:07:25.828 ================================ 00:07:25.828 Vendor ID: 1b36 00:07:25.828 Subsystem Vendor ID: 1af4 00:07:25.828 Serial Number: 12341 00:07:25.828 Model Number: QEMU NVMe Ctrl 00:07:25.828 Firmware Version: 8.0.0 00:07:25.828 Recommended Arb Burst: 6 00:07:25.828 IEEE OUI Identifier: 00 54 52 00:07:25.828 Multi-path I/O 00:07:25.828 May have multiple subsystem ports: No 00:07:25.828 May have multiple controllers: No 00:07:25.828 Associated with SR-IOV VF: No 00:07:25.828 Max Data Transfer Size: 524288 00:07:25.828 Max Number of Namespaces: 256 00:07:25.828 Max Number of I/O Queues: 64 00:07:25.828 NVMe 
Specification Version (VS): 1.4 00:07:25.828 NVMe Specification Version (Identify): 1.4 00:07:25.828 Maximum Queue Entries: 2048 00:07:25.828 Contiguous Queues Required: Yes 00:07:25.828 Arbitration Mechanisms Supported 00:07:25.828 Weighted Round Robin: Not Supported 00:07:25.828 Vendor Specific: Not Supported 00:07:25.828 Reset Timeout: 7500 ms 00:07:25.828 Doorbell Stride: 4 bytes 00:07:25.828 NVM Subsystem Reset: Not Supported 00:07:25.828 Command Sets Supported 00:07:25.828 NVM Command Set: Supported 00:07:25.828 Boot Partition: Not Supported 00:07:25.828 Memory Page Size Minimum: 4096 bytes 00:07:25.828 Memory Page Size Maximum: 65536 bytes 00:07:25.828 Persistent Memory Region: Not Supported 00:07:25.828 Optional Asynchronous Events Supported 00:07:25.828 Namespace Attribute Notices: Supported 00:07:25.828 Firmware Activation Notices: Not Supported 00:07:25.828 ANA Change Notices: Not Supported 00:07:25.828 PLE Aggregate Log Change Notices: Not Supported 00:07:25.828 LBA Status Info Alert Notices: Not Supported 00:07:25.828 EGE Aggregate Log Change Notices: Not Supported 00:07:25.828 Normal NVM Subsystem Shutdown event: Not Supported 00:07:25.828 Zone Descriptor Change Notices: Not Supported 00:07:25.828 Discovery Log Change Notices: Not Supported 00:07:25.828 Controller Attributes 00:07:25.828 128-bit Host Identifier: Not Supported 00:07:25.828 Non-Operational Permissive Mode: Not Supported 00:07:25.828 NVM Sets: Not Supported 00:07:25.828 Read Recovery Levels: Not Supported 00:07:25.828 Endurance Groups: Not Supported 00:07:25.828 Predictable Latency Mode: Not Supported 00:07:25.828 Traffic Based Keep ALive: Not Supported 00:07:25.828 Namespace Granularity: Not Supported 00:07:25.828 SQ Associations: Not Supported 00:07:25.828 UUID List: Not Supported 00:07:25.828 Multi-Domain Subsystem: Not Supported 00:07:25.828 Fixed Capacity Management: Not Supported 00:07:25.828 Variable Capacity Management: Not Supported 00:07:25.828 Delete Endurance Group: Not Supported 00:07:25.828 Delete NVM Set: Not Supported 00:07:25.828 Extended LBA Formats Supported: Supported 00:07:25.828 Flexible Data Placement Supported: Not Supported 00:07:25.828 00:07:25.828 Controller Memory Buffer Support 00:07:25.828 ================================ 00:07:25.828 Supported: No 00:07:25.828 00:07:25.828 Persistent Memory Region Support 00:07:25.828 ================================ 00:07:25.828 Supported: No 00:07:25.828 00:07:25.828 Admin Command Set Attributes 00:07:25.828 ============================ 00:07:25.828 Security Send/Receive: Not Supported 00:07:25.828 Format NVM: Supported 00:07:25.828 Firmware Activate/Download: Not Supported 00:07:25.828 Namespace Management: Supported 00:07:25.828 Device Self-Test: Not Supported 00:07:25.828 Directives: Supported 00:07:25.828 NVMe-MI: Not Supported 00:07:25.828 Virtualization Management: Not Supported 00:07:25.828 Doorbell Buffer Config: Supported 00:07:25.828 Get LBA Status Capability: Not Supported 00:07:25.828 Command & Feature Lockdown Capability: Not Supported 00:07:25.828 Abort Command Limit: 4 00:07:25.828 Async Event Request Limit: 4 00:07:25.828 Number of Firmware Slots: N/A 00:07:25.828 Firmware Slot 1 Read-Only: N/A 00:07:25.829 Firmware Activation Without Reset: N/A 00:07:25.829 Multiple Update Detection Support: N/A 00:07:25.829 Firmware Update Granularity: No Information Provided 00:07:25.829 Per-Namespace SMART Log: Yes 00:07:25.829 Asymmetric Namespace Access Log Page: Not Supported 00:07:25.829 Subsystem NQN: nqn.2019-08.org.qemu:12341 
00:07:25.829 Command Effects Log Page: Supported 00:07:25.829 Get Log Page Extended Data: Supported 00:07:25.829 Telemetry Log Pages: Not Supported 00:07:25.829 Persistent Event Log Pages: Not Supported 00:07:25.829 Supported Log Pages Log Page: May Support 00:07:25.829 Commands Supported & Effects Log Page: Not Supported 00:07:25.829 Feature Identifiers & Effects Log Page:May Support 00:07:25.829 NVMe-MI Commands & Effects Log Page: May Support 00:07:25.829 Data Area 4 for Telemetry Log: Not Supported 00:07:25.829 Error Log Page Entries Supported: 1 00:07:25.829 Keep Alive: Not Supported 00:07:25.829 00:07:25.829 NVM Command Set Attributes 00:07:25.829 ========================== 00:07:25.829 Submission Queue Entry Size 00:07:25.829 Max: 64 00:07:25.829 Min: 64 00:07:25.829 Completion Queue Entry Size 00:07:25.829 Max: 16 00:07:25.829 Min: 16 00:07:25.829 Number of Namespaces: 256 00:07:25.829 Compare Command: Supported 00:07:25.829 Write Uncorrectable Command: Not Supported 00:07:25.829 Dataset Management Command: Supported 00:07:25.829 Write Zeroes Command: Supported 00:07:25.829 Set Features Save Field: Supported 00:07:25.829 Reservations: Not Supported 00:07:25.829 Timestamp: Supported 00:07:25.829 Copy: Supported 00:07:25.829 Volatile Write Cache: Present 00:07:25.829 Atomic Write Unit (Normal): 1 00:07:25.829 Atomic Write Unit (PFail): 1 00:07:25.829 Atomic Compare & Write Unit: 1 00:07:25.829 Fused Compare & Write: Not Supported 00:07:25.829 Scatter-Gather List 00:07:25.829 SGL Command Set: Supported 00:07:25.829 SGL Keyed: Not Supported 00:07:25.829 SGL Bit Bucket Descriptor: Not Supported 00:07:25.829 SGL Metadata Pointer: Not Supported 00:07:25.829 Oversized SGL: Not Supported 00:07:25.829 SGL Metadata Address: Not Supported 00:07:25.829 SGL Offset: Not Supported 00:07:25.829 Transport SGL Data Block: Not Supported 00:07:25.829 Replay Protected Memory Block: Not Supported 00:07:25.829 00:07:25.829 Firmware Slot Information 00:07:25.829 ========================= 00:07:25.829 Active slot: 1 00:07:25.829 Slot 1 Firmware Revision: 1.0 00:07:25.829 00:07:25.829 00:07:25.829 Commands Supported and Effects 00:07:25.829 ============================== 00:07:25.829 Admin Commands 00:07:25.829 -------------- 00:07:25.829 Delete I/O Submission Queue (00h): Supported 00:07:25.829 Create I/O Submission Queue (01h): Supported 00:07:25.829 Get Log Page (02h): Supported 00:07:25.829 Delete I/O Completion Queue (04h): Supported 00:07:25.829 Create I/O Completion Queue (05h): Supported 00:07:25.829 Identify (06h): Supported 00:07:25.829 Abort (08h): Supported 00:07:25.829 Set Features (09h): Supported 00:07:25.829 Get Features (0Ah): Supported 00:07:25.829 Asynchronous Event Request (0Ch): Supported 00:07:25.829 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:25.829 Directive Send (19h): Supported 00:07:25.829 Directive Receive (1Ah): Supported 00:07:25.829 Virtualization Management (1Ch): Supported 00:07:25.829 Doorbell Buffer Config (7Ch): Supported 00:07:25.829 Format NVM (80h): Supported LBA-Change 00:07:25.829 I/O Commands 00:07:25.829 ------------ 00:07:25.829 Flush (00h): Supported LBA-Change 00:07:25.829 Write (01h): Supported LBA-Change 00:07:25.829 Read (02h): Supported 00:07:25.829 Compare (05h): Supported 00:07:25.829 Write Zeroes (08h): Supported LBA-Change 00:07:25.829 Dataset Management (09h): Supported LBA-Change 00:07:25.829 Unknown (0Ch): Supported 00:07:25.829 Unknown (12h): Supported 00:07:25.829 Copy (19h): Supported LBA-Change 00:07:25.829 Unknown (1Dh): 
Supported LBA-Change 00:07:25.829 00:07:25.829 Error Log 00:07:25.829 ========= 00:07:25.829 00:07:25.829 Arbitration 00:07:25.829 =========== 00:07:25.829 Arbitration Burst: no limit 00:07:25.829 00:07:25.829 Power Management 00:07:25.829 ================ 00:07:25.829 Number of Power States: 1 00:07:25.829 Current Power State: Power State #0 00:07:25.829 Power State #0: 00:07:25.829 Max Power: 25.00 W 00:07:25.829 Non-Operational State: Operational 00:07:25.829 Entry Latency: 16 microseconds 00:07:25.829 Exit Latency: 4 microseconds 00:07:25.829 Relative Read Throughput: 0 00:07:25.829 Relative Read Latency: 0 00:07:25.829 Relative Write Throughput: 0 00:07:25.829 Relative Write Latency: 0 00:07:25.829 Idle Power: Not Reported 00:07:25.829 Active Power: Not Reported 00:07:25.829 Non-Operational Permissive Mode: Not Supported 00:07:25.829 00:07:25.829 Health Information 00:07:25.829 ================== 00:07:25.829 Critical Warnings: 00:07:25.829 Available Spare Space: OK 00:07:25.829 Temperature: OK 00:07:25.829 Device Reliability: OK 00:07:25.829 Read Only: No 00:07:25.829 Volatile Memory Backup: OK 00:07:25.829 Current Temperature: 323 Kelvin (50 Celsius) 00:07:25.829 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:25.829 Available Spare: 0% 00:07:25.829 Available Spare Threshold: 0% 00:07:25.829 Life Percentage Used: 0% 00:07:25.829 Data Units Read: 1054 00:07:25.829 Data Units Written: 923 00:07:25.829 Host Read Commands: 54005 00:07:25.829 Host Write Commands: 52824 00:07:25.829 Controller Busy Time: 0 minutes 00:07:25.829 Power Cycles: 0 00:07:25.829 Power On Hours: 0 hours 00:07:25.829 Unsafe Shutdowns: 0 00:07:25.829 Unrecoverable Media Errors: 0 00:07:25.829 Lifetime Error Log Entries: 0 00:07:25.829 Warning Temperature Time: 0 minutes 00:07:25.829 Critical Temperature Time: 0 minutes 00:07:25.829 00:07:25.829 Number of Queues 00:07:25.829 ================ 00:07:25.829 Number of I/O Submission Queues: 64 00:07:25.829 Number of I/O Completion Queues: 64 00:07:25.829 00:07:25.829 ZNS Specific Controller Data 00:07:25.829 ============================ 00:07:25.829 Zone Append Size Limit: 0 00:07:25.829 00:07:25.829 00:07:25.829 Active Namespaces 00:07:25.829 ================= 00:07:25.829 Namespace ID:1 00:07:25.829 Error Recovery Timeout: Unlimited 00:07:25.829 Command Set Identifier: NVM (00h) 00:07:25.829 Deallocate: Supported 00:07:25.829 Deallocated/Unwritten Error: Supported 00:07:25.829 Deallocated Read Value: All 0x00 00:07:25.829 Deallocate in Write Zeroes: Not Supported 00:07:25.829 Deallocated Guard Field: 0xFFFF 00:07:25.829 Flush: Supported 00:07:25.829 Reservation: Not Supported 00:07:25.829 Namespace Sharing Capabilities: Private 00:07:25.829 Size (in LBAs): 1310720 (5GiB) 00:07:25.829 Capacity (in LBAs): 1310720 (5GiB) 00:07:25.829 Utilization (in LBAs): 1310720 (5GiB) 00:07:25.829 Thin Provisioning: Not Supported 00:07:25.829 Per-NS Atomic Units: No 00:07:25.829 Maximum Single Source Range Length: 128 00:07:25.829 Maximum Copy Length: 128 00:07:25.829 Maximum Source Range Count: 128 00:07:25.829 NGUID/EUI64 Never Reused: No 00:07:25.829 Namespace Write Protected: No 00:07:25.829 Number of LBA Formats: 8 00:07:25.829 Current LBA Format: LBA Format #04 00:07:25.829 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:25.829 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:25.829 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:25.829 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:25.829 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:07:25.829 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:25.829 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:25.829 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:25.829 00:07:25.829 NVM Specific Namespace Data 00:07:25.829 =========================== 00:07:25.829 Logical Block Storage Tag Mask: 0 00:07:25.829 Protection Information Capabilities: 00:07:25.829 16b Guard Protection Information Storage Tag Support: No 00:07:25.829 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:25.829 Storage Tag Check Read Support: No 00:07:25.829 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.829 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.829 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.829 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.829 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.829 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.829 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.829 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.829 07:39:15 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:25.829 07:39:15 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:26.091 ===================================================== 00:07:26.091 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:26.091 ===================================================== 00:07:26.091 Controller Capabilities/Features 00:07:26.091 ================================ 00:07:26.091 Vendor ID: 1b36 00:07:26.091 Subsystem Vendor ID: 1af4 00:07:26.091 Serial Number: 12342 00:07:26.091 Model Number: QEMU NVMe Ctrl 00:07:26.091 Firmware Version: 8.0.0 00:07:26.091 Recommended Arb Burst: 6 00:07:26.091 IEEE OUI Identifier: 00 54 52 00:07:26.091 Multi-path I/O 00:07:26.091 May have multiple subsystem ports: No 00:07:26.091 May have multiple controllers: No 00:07:26.091 Associated with SR-IOV VF: No 00:07:26.091 Max Data Transfer Size: 524288 00:07:26.091 Max Number of Namespaces: 256 00:07:26.091 Max Number of I/O Queues: 64 00:07:26.091 NVMe Specification Version (VS): 1.4 00:07:26.091 NVMe Specification Version (Identify): 1.4 00:07:26.091 Maximum Queue Entries: 2048 00:07:26.091 Contiguous Queues Required: Yes 00:07:26.091 Arbitration Mechanisms Supported 00:07:26.091 Weighted Round Robin: Not Supported 00:07:26.091 Vendor Specific: Not Supported 00:07:26.091 Reset Timeout: 7500 ms 00:07:26.091 Doorbell Stride: 4 bytes 00:07:26.091 NVM Subsystem Reset: Not Supported 00:07:26.091 Command Sets Supported 00:07:26.091 NVM Command Set: Supported 00:07:26.091 Boot Partition: Not Supported 00:07:26.091 Memory Page Size Minimum: 4096 bytes 00:07:26.091 Memory Page Size Maximum: 65536 bytes 00:07:26.091 Persistent Memory Region: Not Supported 00:07:26.091 Optional Asynchronous Events Supported 00:07:26.091 Namespace Attribute Notices: Supported 00:07:26.091 Firmware Activation Notices: Not Supported 00:07:26.091 ANA Change Notices: Not Supported 00:07:26.091 PLE Aggregate Log Change Notices: Not Supported 00:07:26.091 LBA Status Info Alert Notices: 
Not Supported 00:07:26.091 EGE Aggregate Log Change Notices: Not Supported 00:07:26.091 Normal NVM Subsystem Shutdown event: Not Supported 00:07:26.091 Zone Descriptor Change Notices: Not Supported 00:07:26.091 Discovery Log Change Notices: Not Supported 00:07:26.091 Controller Attributes 00:07:26.091 128-bit Host Identifier: Not Supported 00:07:26.091 Non-Operational Permissive Mode: Not Supported 00:07:26.091 NVM Sets: Not Supported 00:07:26.091 Read Recovery Levels: Not Supported 00:07:26.091 Endurance Groups: Not Supported 00:07:26.091 Predictable Latency Mode: Not Supported 00:07:26.091 Traffic Based Keep ALive: Not Supported 00:07:26.091 Namespace Granularity: Not Supported 00:07:26.091 SQ Associations: Not Supported 00:07:26.091 UUID List: Not Supported 00:07:26.091 Multi-Domain Subsystem: Not Supported 00:07:26.091 Fixed Capacity Management: Not Supported 00:07:26.091 Variable Capacity Management: Not Supported 00:07:26.091 Delete Endurance Group: Not Supported 00:07:26.091 Delete NVM Set: Not Supported 00:07:26.091 Extended LBA Formats Supported: Supported 00:07:26.091 Flexible Data Placement Supported: Not Supported 00:07:26.091 00:07:26.091 Controller Memory Buffer Support 00:07:26.091 ================================ 00:07:26.091 Supported: No 00:07:26.091 00:07:26.091 Persistent Memory Region Support 00:07:26.091 ================================ 00:07:26.091 Supported: No 00:07:26.091 00:07:26.091 Admin Command Set Attributes 00:07:26.091 ============================ 00:07:26.091 Security Send/Receive: Not Supported 00:07:26.091 Format NVM: Supported 00:07:26.091 Firmware Activate/Download: Not Supported 00:07:26.091 Namespace Management: Supported 00:07:26.091 Device Self-Test: Not Supported 00:07:26.091 Directives: Supported 00:07:26.091 NVMe-MI: Not Supported 00:07:26.091 Virtualization Management: Not Supported 00:07:26.091 Doorbell Buffer Config: Supported 00:07:26.091 Get LBA Status Capability: Not Supported 00:07:26.091 Command & Feature Lockdown Capability: Not Supported 00:07:26.091 Abort Command Limit: 4 00:07:26.091 Async Event Request Limit: 4 00:07:26.091 Number of Firmware Slots: N/A 00:07:26.091 Firmware Slot 1 Read-Only: N/A 00:07:26.091 Firmware Activation Without Reset: N/A 00:07:26.091 Multiple Update Detection Support: N/A 00:07:26.091 Firmware Update Granularity: No Information Provided 00:07:26.091 Per-Namespace SMART Log: Yes 00:07:26.091 Asymmetric Namespace Access Log Page: Not Supported 00:07:26.091 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:26.091 Command Effects Log Page: Supported 00:07:26.091 Get Log Page Extended Data: Supported 00:07:26.091 Telemetry Log Pages: Not Supported 00:07:26.091 Persistent Event Log Pages: Not Supported 00:07:26.091 Supported Log Pages Log Page: May Support 00:07:26.091 Commands Supported & Effects Log Page: Not Supported 00:07:26.091 Feature Identifiers & Effects Log Page:May Support 00:07:26.091 NVMe-MI Commands & Effects Log Page: May Support 00:07:26.091 Data Area 4 for Telemetry Log: Not Supported 00:07:26.091 Error Log Page Entries Supported: 1 00:07:26.091 Keep Alive: Not Supported 00:07:26.091 00:07:26.091 NVM Command Set Attributes 00:07:26.091 ========================== 00:07:26.091 Submission Queue Entry Size 00:07:26.091 Max: 64 00:07:26.091 Min: 64 00:07:26.091 Completion Queue Entry Size 00:07:26.091 Max: 16 00:07:26.091 Min: 16 00:07:26.091 Number of Namespaces: 256 00:07:26.091 Compare Command: Supported 00:07:26.091 Write Uncorrectable Command: Not Supported 00:07:26.091 Dataset Management Command: 
Supported 00:07:26.091 Write Zeroes Command: Supported 00:07:26.091 Set Features Save Field: Supported 00:07:26.091 Reservations: Not Supported 00:07:26.091 Timestamp: Supported 00:07:26.091 Copy: Supported 00:07:26.091 Volatile Write Cache: Present 00:07:26.091 Atomic Write Unit (Normal): 1 00:07:26.091 Atomic Write Unit (PFail): 1 00:07:26.091 Atomic Compare & Write Unit: 1 00:07:26.091 Fused Compare & Write: Not Supported 00:07:26.091 Scatter-Gather List 00:07:26.091 SGL Command Set: Supported 00:07:26.091 SGL Keyed: Not Supported 00:07:26.091 SGL Bit Bucket Descriptor: Not Supported 00:07:26.091 SGL Metadata Pointer: Not Supported 00:07:26.091 Oversized SGL: Not Supported 00:07:26.091 SGL Metadata Address: Not Supported 00:07:26.091 SGL Offset: Not Supported 00:07:26.091 Transport SGL Data Block: Not Supported 00:07:26.091 Replay Protected Memory Block: Not Supported 00:07:26.091 00:07:26.091 Firmware Slot Information 00:07:26.091 ========================= 00:07:26.091 Active slot: 1 00:07:26.091 Slot 1 Firmware Revision: 1.0 00:07:26.091 00:07:26.091 00:07:26.091 Commands Supported and Effects 00:07:26.091 ============================== 00:07:26.091 Admin Commands 00:07:26.091 -------------- 00:07:26.091 Delete I/O Submission Queue (00h): Supported 00:07:26.091 Create I/O Submission Queue (01h): Supported 00:07:26.092 Get Log Page (02h): Supported 00:07:26.092 Delete I/O Completion Queue (04h): Supported 00:07:26.092 Create I/O Completion Queue (05h): Supported 00:07:26.092 Identify (06h): Supported 00:07:26.092 Abort (08h): Supported 00:07:26.092 Set Features (09h): Supported 00:07:26.092 Get Features (0Ah): Supported 00:07:26.092 Asynchronous Event Request (0Ch): Supported 00:07:26.092 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:26.092 Directive Send (19h): Supported 00:07:26.092 Directive Receive (1Ah): Supported 00:07:26.092 Virtualization Management (1Ch): Supported 00:07:26.092 Doorbell Buffer Config (7Ch): Supported 00:07:26.092 Format NVM (80h): Supported LBA-Change 00:07:26.092 I/O Commands 00:07:26.092 ------------ 00:07:26.092 Flush (00h): Supported LBA-Change 00:07:26.092 Write (01h): Supported LBA-Change 00:07:26.092 Read (02h): Supported 00:07:26.092 Compare (05h): Supported 00:07:26.092 Write Zeroes (08h): Supported LBA-Change 00:07:26.092 Dataset Management (09h): Supported LBA-Change 00:07:26.092 Unknown (0Ch): Supported 00:07:26.092 Unknown (12h): Supported 00:07:26.092 Copy (19h): Supported LBA-Change 00:07:26.092 Unknown (1Dh): Supported LBA-Change 00:07:26.092 00:07:26.092 Error Log 00:07:26.092 ========= 00:07:26.092 00:07:26.092 Arbitration 00:07:26.092 =========== 00:07:26.092 Arbitration Burst: no limit 00:07:26.092 00:07:26.092 Power Management 00:07:26.092 ================ 00:07:26.092 Number of Power States: 1 00:07:26.092 Current Power State: Power State #0 00:07:26.092 Power State #0: 00:07:26.092 Max Power: 25.00 W 00:07:26.092 Non-Operational State: Operational 00:07:26.092 Entry Latency: 16 microseconds 00:07:26.092 Exit Latency: 4 microseconds 00:07:26.092 Relative Read Throughput: 0 00:07:26.092 Relative Read Latency: 0 00:07:26.092 Relative Write Throughput: 0 00:07:26.092 Relative Write Latency: 0 00:07:26.092 Idle Power: Not Reported 00:07:26.092 Active Power: Not Reported 00:07:26.092 Non-Operational Permissive Mode: Not Supported 00:07:26.092 00:07:26.092 Health Information 00:07:26.092 ================== 00:07:26.092 Critical Warnings: 00:07:26.092 Available Spare Space: OK 00:07:26.092 Temperature: OK 00:07:26.092 Device 
Reliability: OK 00:07:26.092 Read Only: No 00:07:26.092 Volatile Memory Backup: OK 00:07:26.092 Current Temperature: 323 Kelvin (50 Celsius) 00:07:26.092 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:26.092 Available Spare: 0% 00:07:26.092 Available Spare Threshold: 0% 00:07:26.092 Life Percentage Used: 0% 00:07:26.092 Data Units Read: 2183 00:07:26.092 Data Units Written: 1970 00:07:26.092 Host Read Commands: 109994 00:07:26.092 Host Write Commands: 108263 00:07:26.092 Controller Busy Time: 0 minutes 00:07:26.092 Power Cycles: 0 00:07:26.092 Power On Hours: 0 hours 00:07:26.092 Unsafe Shutdowns: 0 00:07:26.092 Unrecoverable Media Errors: 0 00:07:26.092 Lifetime Error Log Entries: 0 00:07:26.092 Warning Temperature Time: 0 minutes 00:07:26.092 Critical Temperature Time: 0 minutes 00:07:26.092 00:07:26.092 Number of Queues 00:07:26.092 ================ 00:07:26.092 Number of I/O Submission Queues: 64 00:07:26.092 Number of I/O Completion Queues: 64 00:07:26.092 00:07:26.092 ZNS Specific Controller Data 00:07:26.092 ============================ 00:07:26.092 Zone Append Size Limit: 0 00:07:26.092 00:07:26.092 00:07:26.092 Active Namespaces 00:07:26.092 ================= 00:07:26.092 Namespace ID:1 00:07:26.092 Error Recovery Timeout: Unlimited 00:07:26.092 Command Set Identifier: NVM (00h) 00:07:26.092 Deallocate: Supported 00:07:26.092 Deallocated/Unwritten Error: Supported 00:07:26.092 Deallocated Read Value: All 0x00 00:07:26.092 Deallocate in Write Zeroes: Not Supported 00:07:26.092 Deallocated Guard Field: 0xFFFF 00:07:26.092 Flush: Supported 00:07:26.092 Reservation: Not Supported 00:07:26.092 Namespace Sharing Capabilities: Private 00:07:26.092 Size (in LBAs): 1048576 (4GiB) 00:07:26.092 Capacity (in LBAs): 1048576 (4GiB) 00:07:26.092 Utilization (in LBAs): 1048576 (4GiB) 00:07:26.092 Thin Provisioning: Not Supported 00:07:26.092 Per-NS Atomic Units: No 00:07:26.092 Maximum Single Source Range Length: 128 00:07:26.092 Maximum Copy Length: 128 00:07:26.092 Maximum Source Range Count: 128 00:07:26.092 NGUID/EUI64 Never Reused: No 00:07:26.092 Namespace Write Protected: No 00:07:26.092 Number of LBA Formats: 8 00:07:26.092 Current LBA Format: LBA Format #04 00:07:26.092 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:26.092 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:26.092 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:26.092 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:26.092 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:26.092 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:26.092 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:26.092 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:26.092 00:07:26.092 NVM Specific Namespace Data 00:07:26.092 =========================== 00:07:26.092 Logical Block Storage Tag Mask: 0 00:07:26.092 Protection Information Capabilities: 00:07:26.092 16b Guard Protection Information Storage Tag Support: No 00:07:26.092 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:26.092 Storage Tag Check Read Support: No 00:07:26.092 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:26.092 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:26.092 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:26.092 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:26.092 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:26.092 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:26.092 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:26.092 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:26.092 Namespace ID:2 00:07:26.092 Error Recovery Timeout: Unlimited 00:07:26.092 Command Set Identifier: NVM (00h) 00:07:26.092 Deallocate: Supported 00:07:26.092 Deallocated/Unwritten Error: Supported 00:07:26.092 Deallocated Read Value: All 0x00 00:07:26.092 Deallocate in Write Zeroes: Not Supported 00:07:26.092 Deallocated Guard Field: 0xFFFF 00:07:26.092 Flush: Supported 00:07:26.092 Reservation: Not Supported 00:07:26.092 Namespace Sharing Capabilities: Private 00:07:26.092 Size (in LBAs): 1048576 (4GiB) 00:07:26.092 Capacity (in LBAs): 1048576 (4GiB) 00:07:26.092 Utilization (in LBAs): 1048576 (4GiB) 00:07:26.092 Thin Provisioning: Not Supported 00:07:26.092 Per-NS Atomic Units: No 00:07:26.092 Maximum Single Source Range Length: 128 00:07:26.092 Maximum Copy Length: 128 00:07:26.092 Maximum Source Range Count: 128 00:07:26.092 NGUID/EUI64 Never Reused: No 00:07:26.092 Namespace Write Protected: No 00:07:26.092 Number of LBA Formats: 8 00:07:26.092 Current LBA Format: LBA Format #04 00:07:26.092 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:26.092 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:26.092 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:26.092 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:26.092 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:26.092 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:26.092 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:26.092 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:26.092 00:07:26.092 NVM Specific Namespace Data 00:07:26.092 =========================== 00:07:26.092 Logical Block Storage Tag Mask: 0 00:07:26.092 Protection Information Capabilities: 00:07:26.092 16b Guard Protection Information Storage Tag Support: No 00:07:26.092 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:26.092 Storage Tag Check Read Support: No 00:07:26.092 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:26.092 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:26.092 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:26.092 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:26.092 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:26.092 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:26.092 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:26.092 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:26.092 Namespace ID:3 00:07:26.093 Error Recovery Timeout: Unlimited 00:07:26.093 Command Set Identifier: NVM (00h) 00:07:26.093 Deallocate: Supported 00:07:26.093 Deallocated/Unwritten Error: Supported 00:07:26.093 Deallocated Read Value: All 0x00 00:07:26.093 Deallocate in Write Zeroes: Not Supported 00:07:26.093 Deallocated Guard Field: 0xFFFF 00:07:26.093 Flush: Supported 00:07:26.093 Reservation: Not Supported 00:07:26.093 
Namespace Sharing Capabilities: Private 00:07:26.093 Size (in LBAs): 1048576 (4GiB) 00:07:26.093 Capacity (in LBAs): 1048576 (4GiB) 00:07:26.093 Utilization (in LBAs): 1048576 (4GiB) 00:07:26.093 Thin Provisioning: Not Supported 00:07:26.093 Per-NS Atomic Units: No 00:07:26.093 Maximum Single Source Range Length: 128 00:07:26.093 Maximum Copy Length: 128 00:07:26.093 Maximum Source Range Count: 128 00:07:26.093 NGUID/EUI64 Never Reused: No 00:07:26.093 Namespace Write Protected: No 00:07:26.093 Number of LBA Formats: 8 00:07:26.093 Current LBA Format: LBA Format #04 00:07:26.093 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:26.093 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:26.093 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:26.093 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:26.093 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:26.093 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:26.093 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:26.093 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:26.093 00:07:26.093 NVM Specific Namespace Data 00:07:26.093 =========================== 00:07:26.093 Logical Block Storage Tag Mask: 0 00:07:26.093 Protection Information Capabilities: 00:07:26.093 16b Guard Protection Information Storage Tag Support: No 00:07:26.093 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:26.093 Storage Tag Check Read Support: No 00:07:26.093 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:26.093 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:26.093 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:26.093 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:26.093 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:26.093 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:26.093 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:26.093 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:26.093 07:39:15 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:26.093 07:39:15 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:26.354 ===================================================== 00:07:26.354 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:26.354 ===================================================== 00:07:26.354 Controller Capabilities/Features 00:07:26.354 ================================ 00:07:26.354 Vendor ID: 1b36 00:07:26.354 Subsystem Vendor ID: 1af4 00:07:26.354 Serial Number: 12343 00:07:26.354 Model Number: QEMU NVMe Ctrl 00:07:26.354 Firmware Version: 8.0.0 00:07:26.354 Recommended Arb Burst: 6 00:07:26.354 IEEE OUI Identifier: 00 54 52 00:07:26.354 Multi-path I/O 00:07:26.354 May have multiple subsystem ports: No 00:07:26.354 May have multiple controllers: Yes 00:07:26.354 Associated with SR-IOV VF: No 00:07:26.354 Max Data Transfer Size: 524288 00:07:26.354 Max Number of Namespaces: 256 00:07:26.354 Max Number of I/O Queues: 64 00:07:26.354 NVMe Specification Version (VS): 1.4 00:07:26.354 NVMe Specification Version (Identify): 1.4 00:07:26.354 Maximum Queue Entries: 2048 
00:07:26.354 Contiguous Queues Required: Yes 00:07:26.354 Arbitration Mechanisms Supported 00:07:26.354 Weighted Round Robin: Not Supported 00:07:26.354 Vendor Specific: Not Supported 00:07:26.354 Reset Timeout: 7500 ms 00:07:26.354 Doorbell Stride: 4 bytes 00:07:26.354 NVM Subsystem Reset: Not Supported 00:07:26.354 Command Sets Supported 00:07:26.354 NVM Command Set: Supported 00:07:26.354 Boot Partition: Not Supported 00:07:26.354 Memory Page Size Minimum: 4096 bytes 00:07:26.354 Memory Page Size Maximum: 65536 bytes 00:07:26.354 Persistent Memory Region: Not Supported 00:07:26.354 Optional Asynchronous Events Supported 00:07:26.354 Namespace Attribute Notices: Supported 00:07:26.354 Firmware Activation Notices: Not Supported 00:07:26.354 ANA Change Notices: Not Supported 00:07:26.354 PLE Aggregate Log Change Notices: Not Supported 00:07:26.354 LBA Status Info Alert Notices: Not Supported 00:07:26.354 EGE Aggregate Log Change Notices: Not Supported 00:07:26.354 Normal NVM Subsystem Shutdown event: Not Supported 00:07:26.354 Zone Descriptor Change Notices: Not Supported 00:07:26.354 Discovery Log Change Notices: Not Supported 00:07:26.354 Controller Attributes 00:07:26.354 128-bit Host Identifier: Not Supported 00:07:26.354 Non-Operational Permissive Mode: Not Supported 00:07:26.354 NVM Sets: Not Supported 00:07:26.354 Read Recovery Levels: Not Supported 00:07:26.354 Endurance Groups: Supported 00:07:26.354 Predictable Latency Mode: Not Supported 00:07:26.354 Traffic Based Keep ALive: Not Supported 00:07:26.354 Namespace Granularity: Not Supported 00:07:26.354 SQ Associations: Not Supported 00:07:26.354 UUID List: Not Supported 00:07:26.354 Multi-Domain Subsystem: Not Supported 00:07:26.354 Fixed Capacity Management: Not Supported 00:07:26.354 Variable Capacity Management: Not Supported 00:07:26.354 Delete Endurance Group: Not Supported 00:07:26.354 Delete NVM Set: Not Supported 00:07:26.354 Extended LBA Formats Supported: Supported 00:07:26.354 Flexible Data Placement Supported: Supported 00:07:26.354 00:07:26.354 Controller Memory Buffer Support 00:07:26.354 ================================ 00:07:26.354 Supported: No 00:07:26.354 00:07:26.354 Persistent Memory Region Support 00:07:26.354 ================================ 00:07:26.354 Supported: No 00:07:26.354 00:07:26.354 Admin Command Set Attributes 00:07:26.354 ============================ 00:07:26.354 Security Send/Receive: Not Supported 00:07:26.354 Format NVM: Supported 00:07:26.355 Firmware Activate/Download: Not Supported 00:07:26.355 Namespace Management: Supported 00:07:26.355 Device Self-Test: Not Supported 00:07:26.355 Directives: Supported 00:07:26.355 NVMe-MI: Not Supported 00:07:26.355 Virtualization Management: Not Supported 00:07:26.355 Doorbell Buffer Config: Supported 00:07:26.355 Get LBA Status Capability: Not Supported 00:07:26.355 Command & Feature Lockdown Capability: Not Supported 00:07:26.355 Abort Command Limit: 4 00:07:26.355 Async Event Request Limit: 4 00:07:26.355 Number of Firmware Slots: N/A 00:07:26.355 Firmware Slot 1 Read-Only: N/A 00:07:26.355 Firmware Activation Without Reset: N/A 00:07:26.355 Multiple Update Detection Support: N/A 00:07:26.355 Firmware Update Granularity: No Information Provided 00:07:26.355 Per-Namespace SMART Log: Yes 00:07:26.355 Asymmetric Namespace Access Log Page: Not Supported 00:07:26.355 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:26.355 Command Effects Log Page: Supported 00:07:26.355 Get Log Page Extended Data: Supported 00:07:26.355 Telemetry Log Pages: Not 
Supported 00:07:26.355 Persistent Event Log Pages: Not Supported 00:07:26.355 Supported Log Pages Log Page: May Support 00:07:26.355 Commands Supported & Effects Log Page: Not Supported 00:07:26.355 Feature Identifiers & Effects Log Page:May Support 00:07:26.355 NVMe-MI Commands & Effects Log Page: May Support 00:07:26.355 Data Area 4 for Telemetry Log: Not Supported 00:07:26.355 Error Log Page Entries Supported: 1 00:07:26.355 Keep Alive: Not Supported 00:07:26.355 00:07:26.355 NVM Command Set Attributes 00:07:26.355 ========================== 00:07:26.355 Submission Queue Entry Size 00:07:26.355 Max: 64 00:07:26.355 Min: 64 00:07:26.355 Completion Queue Entry Size 00:07:26.355 Max: 16 00:07:26.355 Min: 16 00:07:26.355 Number of Namespaces: 256 00:07:26.355 Compare Command: Supported 00:07:26.355 Write Uncorrectable Command: Not Supported 00:07:26.355 Dataset Management Command: Supported 00:07:26.355 Write Zeroes Command: Supported 00:07:26.355 Set Features Save Field: Supported 00:07:26.355 Reservations: Not Supported 00:07:26.355 Timestamp: Supported 00:07:26.355 Copy: Supported 00:07:26.355 Volatile Write Cache: Present 00:07:26.355 Atomic Write Unit (Normal): 1 00:07:26.355 Atomic Write Unit (PFail): 1 00:07:26.355 Atomic Compare & Write Unit: 1 00:07:26.355 Fused Compare & Write: Not Supported 00:07:26.355 Scatter-Gather List 00:07:26.355 SGL Command Set: Supported 00:07:26.355 SGL Keyed: Not Supported 00:07:26.355 SGL Bit Bucket Descriptor: Not Supported 00:07:26.355 SGL Metadata Pointer: Not Supported 00:07:26.355 Oversized SGL: Not Supported 00:07:26.355 SGL Metadata Address: Not Supported 00:07:26.355 SGL Offset: Not Supported 00:07:26.355 Transport SGL Data Block: Not Supported 00:07:26.355 Replay Protected Memory Block: Not Supported 00:07:26.355 00:07:26.355 Firmware Slot Information 00:07:26.355 ========================= 00:07:26.355 Active slot: 1 00:07:26.355 Slot 1 Firmware Revision: 1.0 00:07:26.355 00:07:26.355 00:07:26.355 Commands Supported and Effects 00:07:26.355 ============================== 00:07:26.355 Admin Commands 00:07:26.355 -------------- 00:07:26.355 Delete I/O Submission Queue (00h): Supported 00:07:26.355 Create I/O Submission Queue (01h): Supported 00:07:26.355 Get Log Page (02h): Supported 00:07:26.355 Delete I/O Completion Queue (04h): Supported 00:07:26.355 Create I/O Completion Queue (05h): Supported 00:07:26.355 Identify (06h): Supported 00:07:26.355 Abort (08h): Supported 00:07:26.355 Set Features (09h): Supported 00:07:26.355 Get Features (0Ah): Supported 00:07:26.355 Asynchronous Event Request (0Ch): Supported 00:07:26.355 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:26.355 Directive Send (19h): Supported 00:07:26.355 Directive Receive (1Ah): Supported 00:07:26.355 Virtualization Management (1Ch): Supported 00:07:26.355 Doorbell Buffer Config (7Ch): Supported 00:07:26.355 Format NVM (80h): Supported LBA-Change 00:07:26.355 I/O Commands 00:07:26.355 ------------ 00:07:26.355 Flush (00h): Supported LBA-Change 00:07:26.355 Write (01h): Supported LBA-Change 00:07:26.355 Read (02h): Supported 00:07:26.355 Compare (05h): Supported 00:07:26.355 Write Zeroes (08h): Supported LBA-Change 00:07:26.355 Dataset Management (09h): Supported LBA-Change 00:07:26.355 Unknown (0Ch): Supported 00:07:26.355 Unknown (12h): Supported 00:07:26.355 Copy (19h): Supported LBA-Change 00:07:26.355 Unknown (1Dh): Supported LBA-Change 00:07:26.355 00:07:26.355 Error Log 00:07:26.355 ========= 00:07:26.355 00:07:26.355 Arbitration 00:07:26.355 =========== 
00:07:26.355 Arbitration Burst: no limit 00:07:26.355 00:07:26.355 Power Management 00:07:26.355 ================ 00:07:26.355 Number of Power States: 1 00:07:26.355 Current Power State: Power State #0 00:07:26.355 Power State #0: 00:07:26.355 Max Power: 25.00 W 00:07:26.355 Non-Operational State: Operational 00:07:26.355 Entry Latency: 16 microseconds 00:07:26.355 Exit Latency: 4 microseconds 00:07:26.355 Relative Read Throughput: 0 00:07:26.355 Relative Read Latency: 0 00:07:26.355 Relative Write Throughput: 0 00:07:26.355 Relative Write Latency: 0 00:07:26.355 Idle Power: Not Reported 00:07:26.355 Active Power: Not Reported 00:07:26.355 Non-Operational Permissive Mode: Not Supported 00:07:26.355 00:07:26.355 Health Information 00:07:26.355 ================== 00:07:26.355 Critical Warnings: 00:07:26.355 Available Spare Space: OK 00:07:26.355 Temperature: OK 00:07:26.355 Device Reliability: OK 00:07:26.355 Read Only: No 00:07:26.355 Volatile Memory Backup: OK 00:07:26.355 Current Temperature: 323 Kelvin (50 Celsius) 00:07:26.355 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:26.355 Available Spare: 0% 00:07:26.355 Available Spare Threshold: 0% 00:07:26.355 Life Percentage Used: 0% 00:07:26.355 Data Units Read: 815 00:07:26.355 Data Units Written: 744 00:07:26.355 Host Read Commands: 37348 00:07:26.355 Host Write Commands: 36772 00:07:26.355 Controller Busy Time: 0 minutes 00:07:26.355 Power Cycles: 0 00:07:26.355 Power On Hours: 0 hours 00:07:26.355 Unsafe Shutdowns: 0 00:07:26.355 Unrecoverable Media Errors: 0 00:07:26.355 Lifetime Error Log Entries: 0 00:07:26.355 Warning Temperature Time: 0 minutes 00:07:26.355 Critical Temperature Time: 0 minutes 00:07:26.355 00:07:26.355 Number of Queues 00:07:26.355 ================ 00:07:26.355 Number of I/O Submission Queues: 64 00:07:26.355 Number of I/O Completion Queues: 64 00:07:26.355 00:07:26.355 ZNS Specific Controller Data 00:07:26.355 ============================ 00:07:26.355 Zone Append Size Limit: 0 00:07:26.355 00:07:26.355 00:07:26.355 Active Namespaces 00:07:26.355 ================= 00:07:26.355 Namespace ID:1 00:07:26.355 Error Recovery Timeout: Unlimited 00:07:26.355 Command Set Identifier: NVM (00h) 00:07:26.355 Deallocate: Supported 00:07:26.355 Deallocated/Unwritten Error: Supported 00:07:26.355 Deallocated Read Value: All 0x00 00:07:26.355 Deallocate in Write Zeroes: Not Supported 00:07:26.355 Deallocated Guard Field: 0xFFFF 00:07:26.355 Flush: Supported 00:07:26.355 Reservation: Not Supported 00:07:26.355 Namespace Sharing Capabilities: Multiple Controllers 00:07:26.355 Size (in LBAs): 262144 (1GiB) 00:07:26.355 Capacity (in LBAs): 262144 (1GiB) 00:07:26.355 Utilization (in LBAs): 262144 (1GiB) 00:07:26.355 Thin Provisioning: Not Supported 00:07:26.355 Per-NS Atomic Units: No 00:07:26.355 Maximum Single Source Range Length: 128 00:07:26.355 Maximum Copy Length: 128 00:07:26.355 Maximum Source Range Count: 128 00:07:26.355 NGUID/EUI64 Never Reused: No 00:07:26.355 Namespace Write Protected: No 00:07:26.355 Endurance group ID: 1 00:07:26.355 Number of LBA Formats: 8 00:07:26.355 Current LBA Format: LBA Format #04 00:07:26.355 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:26.355 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:26.355 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:26.355 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:26.355 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:26.355 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:26.355 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:07:26.355 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:26.355 00:07:26.355 Get Feature FDP: 00:07:26.355 ================ 00:07:26.355 Enabled: Yes 00:07:26.355 FDP configuration index: 0 00:07:26.355 00:07:26.355 FDP configurations log page 00:07:26.355 =========================== 00:07:26.355 Number of FDP configurations: 1 00:07:26.355 Version: 0 00:07:26.355 Size: 112 00:07:26.356 FDP Configuration Descriptor: 0 00:07:26.356 Descriptor Size: 96 00:07:26.356 Reclaim Group Identifier format: 2 00:07:26.356 FDP Volatile Write Cache: Not Present 00:07:26.356 FDP Configuration: Valid 00:07:26.356 Vendor Specific Size: 0 00:07:26.356 Number of Reclaim Groups: 2 00:07:26.356 Number of Reclaim Unit Handles: 8 00:07:26.356 Max Placement Identifiers: 128 00:07:26.356 Number of Namespaces Supported: 256 00:07:26.356 Reclaim unit Nominal Size: 6000000 bytes 00:07:26.356 Estimated Reclaim Unit Time Limit: Not Reported 00:07:26.356 RUH Desc #000: RUH Type: Initially Isolated 00:07:26.356 RUH Desc #001: RUH Type: Initially Isolated 00:07:26.356 RUH Desc #002: RUH Type: Initially Isolated 00:07:26.356 RUH Desc #003: RUH Type: Initially Isolated 00:07:26.356 RUH Desc #004: RUH Type: Initially Isolated 00:07:26.356 RUH Desc #005: RUH Type: Initially Isolated 00:07:26.356 RUH Desc #006: RUH Type: Initially Isolated 00:07:26.356 RUH Desc #007: RUH Type: Initially Isolated 00:07:26.356 00:07:26.356 FDP reclaim unit handle usage log page 00:07:26.356 ====================================== 00:07:26.356 Number of Reclaim Unit Handles: 8 00:07:26.356 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:26.356 RUH Usage Desc #001: RUH Attributes: Unused 00:07:26.356 RUH Usage Desc #002: RUH Attributes: Unused 00:07:26.356 RUH Usage Desc #003: RUH Attributes: Unused 00:07:26.356 RUH Usage Desc #004: RUH Attributes: Unused 00:07:26.356 RUH Usage Desc #005: RUH Attributes: Unused 00:07:26.356 RUH Usage Desc #006: RUH Attributes: Unused 00:07:26.356 RUH Usage Desc #007: RUH Attributes: Unused 00:07:26.356 00:07:26.356 FDP statistics log page 00:07:26.356 ======================= 00:07:26.356 Host bytes with metadata written: 480813056 00:07:26.356 Media bytes with metadata written: 480866304 00:07:26.356 Media bytes erased: 0 00:07:26.356 00:07:26.356 FDP events log page 00:07:26.356 =================== 00:07:26.356 Number of FDP events: 0 00:07:26.356 00:07:26.356 NVM Specific Namespace Data 00:07:26.356 =========================== 00:07:26.356 Logical Block Storage Tag Mask: 0 00:07:26.356 Protection Information Capabilities: 00:07:26.356 16b Guard Protection Information Storage Tag Support: No 00:07:26.356 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:26.356 Storage Tag Check Read Support: No 00:07:26.356 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:26.356 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:26.356 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:26.356 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:26.356 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:26.356 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:26.356 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:26.356 Extended LBA
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:26.356 00:07:26.356 real 0m1.205s 00:07:26.356 user 0m0.449s 00:07:26.356 sys 0m0.523s 00:07:26.356 07:39:16 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.356 07:39:16 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:26.356 ************************************ 00:07:26.356 END TEST nvme_identify 00:07:26.356 ************************************ 00:07:26.356 07:39:16 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:26.356 07:39:16 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:26.356 07:39:16 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.356 07:39:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:26.356 ************************************ 00:07:26.356 START TEST nvme_perf 00:07:26.356 ************************************ 00:07:26.356 07:39:16 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:07:26.356 07:39:16 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:27.802 Initializing NVMe Controllers 00:07:27.802 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:27.802 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:27.802 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:27.802 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:27.802 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:27.802 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:27.802 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:27.802 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:27.802 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:27.802 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:27.802 Initialization complete. Launching workers. 
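For anyone re-running this stage by hand, the nvme_perf step above comes down to the single spdk_nvme_perf invocation recorded in the trace; the sketch below restates it with the usual meaning of each flag. The binary path and option values are taken verbatim from this run, while the flag descriptions in the comments are general spdk_nvme_perf semantics and should be confirmed against the --help output of the build under test rather than read as authoritative.

  # queue depth 128, sequential-read workload, 12288-byte (12 KiB) I/Os, 1-second run
  # -LL requests software latency tracking with per-bucket histograms (a single -L prints only the summary)
  # -i 0 joins shared-memory group 0; -N skips the controller shutdown notification on detach
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N

When comparing nightly runs, the per-controller percentile blocks printed below are the easiest numbers to track; against a saved per-line console log (the file name here is only a placeholder) something like the following pulls out each device header together with its 99th-percentile latency:

  grep -E "Summary latency data for PCIE|99\.00000% :" console.log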
00:07:27.802 ======================================================== 00:07:27.802 Latency(us) 00:07:27.802 Device Information : IOPS MiB/s Average min max 00:07:27.802 PCIE (0000:00:11.0) NSID 1 from core 0: 14011.14 164.19 9147.61 5570.51 40763.97 00:07:27.802 PCIE (0000:00:13.0) NSID 1 from core 0: 14011.14 164.19 9133.97 5579.64 39657.77 00:07:27.802 PCIE (0000:00:10.0) NSID 1 from core 0: 14011.14 164.19 9118.90 5507.26 38514.81 00:07:27.802 PCIE (0000:00:12.0) NSID 1 from core 0: 14011.14 164.19 9105.36 5581.77 37742.86 00:07:27.802 PCIE (0000:00:12.0) NSID 2 from core 0: 14011.14 164.19 9091.14 5582.64 38081.12 00:07:27.802 PCIE (0000:00:12.0) NSID 3 from core 0: 14075.12 164.94 9035.74 5576.94 28226.72 00:07:27.802 ======================================================== 00:07:27.802 Total : 84130.81 985.91 9105.40 5507.26 40763.97 00:07:27.802 00:07:27.802 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:27.802 ================================================================================= 00:07:27.802 1.00000% : 5696.591us 00:07:27.802 10.00000% : 5873.034us 00:07:27.802 25.00000% : 6099.889us 00:07:27.802 50.00000% : 6503.188us 00:07:27.802 75.00000% : 12199.778us 00:07:27.802 90.00000% : 15526.991us 00:07:27.802 95.00000% : 17341.834us 00:07:27.802 98.00000% : 18551.729us 00:07:27.802 99.00000% : 19358.326us 00:07:27.802 99.50000% : 30852.332us 00:07:27.802 99.90000% : 40531.495us 00:07:27.802 99.99000% : 40733.145us 00:07:27.802 99.99900% : 40934.794us 00:07:27.802 99.99990% : 40934.794us 00:07:27.802 99.99999% : 40934.794us 00:07:27.802 00:07:27.802 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:27.802 ================================================================================= 00:07:27.802 1.00000% : 5696.591us 00:07:27.802 10.00000% : 5873.034us 00:07:27.802 25.00000% : 6125.095us 00:07:27.802 50.00000% : 6503.188us 00:07:27.802 75.00000% : 12149.366us 00:07:27.802 90.00000% : 15728.640us 00:07:27.802 95.00000% : 17140.185us 00:07:27.802 98.00000% : 18551.729us 00:07:27.802 99.00000% : 20265.748us 00:07:27.802 99.50000% : 29440.788us 00:07:27.802 99.90000% : 39321.600us 00:07:27.802 99.99000% : 39724.898us 00:07:27.802 99.99900% : 39724.898us 00:07:27.802 99.99990% : 39724.898us 00:07:27.802 99.99999% : 39724.898us 00:07:27.802 00:07:27.802 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:27.802 ================================================================================= 00:07:27.802 1.00000% : 5620.972us 00:07:27.802 10.00000% : 5822.622us 00:07:27.802 25.00000% : 6099.889us 00:07:27.802 50.00000% : 6553.600us 00:07:27.802 75.00000% : 12098.954us 00:07:27.802 90.00000% : 15829.465us 00:07:27.802 95.00000% : 17140.185us 00:07:27.802 98.00000% : 18854.203us 00:07:27.802 99.00000% : 20064.098us 00:07:27.802 99.50000% : 28230.892us 00:07:27.802 99.90000% : 38313.354us 00:07:27.802 99.99000% : 38515.003us 00:07:27.802 99.99900% : 38515.003us 00:07:27.802 99.99990% : 38515.003us 00:07:27.802 99.99999% : 38515.003us 00:07:27.802 00:07:27.802 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:27.802 ================================================================================= 00:07:27.802 1.00000% : 5696.591us 00:07:27.802 10.00000% : 5873.034us 00:07:27.802 25.00000% : 6125.095us 00:07:27.802 50.00000% : 6503.188us 00:07:27.802 75.00000% : 12149.366us 00:07:27.802 90.00000% : 15728.640us 00:07:27.802 95.00000% : 17341.834us 00:07:27.802 98.00000% : 18854.203us 00:07:27.802 
99.00000% : 20265.748us 00:07:27.802 99.50000% : 26617.698us 00:07:27.802 99.90000% : 37506.757us 00:07:27.802 99.99000% : 37910.055us 00:07:27.802 99.99900% : 37910.055us 00:07:27.802 99.99990% : 37910.055us 00:07:27.802 99.99999% : 37910.055us 00:07:27.802 00:07:27.802 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:27.802 ================================================================================= 00:07:27.802 1.00000% : 5696.591us 00:07:27.802 10.00000% : 5873.034us 00:07:27.802 25.00000% : 6125.095us 00:07:27.802 50.00000% : 6503.188us 00:07:27.802 75.00000% : 12199.778us 00:07:27.802 90.00000% : 15325.342us 00:07:27.802 95.00000% : 17241.009us 00:07:27.802 98.00000% : 18854.203us 00:07:27.802 99.00000% : 19761.625us 00:07:27.802 99.50000% : 28029.243us 00:07:27.802 99.90000% : 37708.406us 00:07:27.802 99.99000% : 38111.705us 00:07:27.802 99.99900% : 38111.705us 00:07:27.802 99.99990% : 38111.705us 00:07:27.802 99.99999% : 38111.705us 00:07:27.802 00:07:27.802 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:27.802 ================================================================================= 00:07:27.802 1.00000% : 5696.591us 00:07:27.802 10.00000% : 5873.034us 00:07:27.802 25.00000% : 6125.095us 00:07:27.802 50.00000% : 6503.188us 00:07:27.802 75.00000% : 12250.191us 00:07:27.802 90.00000% : 15526.991us 00:07:27.802 95.00000% : 17442.658us 00:07:27.802 98.00000% : 18652.554us 00:07:27.802 99.00000% : 19055.852us 00:07:27.802 99.50000% : 19459.151us 00:07:27.802 99.90000% : 28029.243us 00:07:27.802 99.99000% : 28230.892us 00:07:27.802 99.99900% : 28230.892us 00:07:27.802 99.99990% : 28230.892us 00:07:27.802 99.99999% : 28230.892us 00:07:27.802 00:07:27.802 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:27.802 ============================================================================== 00:07:27.802 Range in us Cumulative IO count 00:07:27.802 5545.354 - 5570.560: 0.0071% ( 1) 00:07:27.802 5570.560 - 5595.766: 0.0357% ( 4) 00:07:27.802 5595.766 - 5620.972: 0.1070% ( 10) 00:07:27.802 5620.972 - 5646.178: 0.3211% ( 30) 00:07:27.802 5646.178 - 5671.385: 0.7848% ( 65) 00:07:27.802 5671.385 - 5696.591: 1.6481% ( 121) 00:07:27.802 5696.591 - 5721.797: 2.5043% ( 120) 00:07:27.802 5721.797 - 5747.003: 3.5959% ( 153) 00:07:27.802 5747.003 - 5772.209: 4.8088% ( 170) 00:07:27.802 5772.209 - 5797.415: 6.1858% ( 193) 00:07:27.802 5797.415 - 5822.622: 7.5271% ( 188) 00:07:27.802 5822.622 - 5847.828: 8.9112% ( 194) 00:07:27.803 5847.828 - 5873.034: 10.4381% ( 214) 00:07:27.803 5873.034 - 5898.240: 11.9863% ( 217) 00:07:27.803 5898.240 - 5923.446: 13.5131% ( 214) 00:07:27.803 5923.446 - 5948.652: 15.1684% ( 232) 00:07:27.803 5948.652 - 5973.858: 16.8165% ( 231) 00:07:27.803 5973.858 - 5999.065: 18.4932% ( 235) 00:07:27.803 5999.065 - 6024.271: 20.0414% ( 217) 00:07:27.803 6024.271 - 6049.477: 21.6895% ( 231) 00:07:27.803 6049.477 - 6074.683: 23.3733% ( 236) 00:07:27.803 6074.683 - 6099.889: 25.0357% ( 233) 00:07:27.803 6099.889 - 6125.095: 26.6981% ( 233) 00:07:27.803 6125.095 - 6150.302: 28.2962% ( 224) 00:07:27.803 6150.302 - 6175.508: 29.9658% ( 234) 00:07:27.803 6175.508 - 6200.714: 31.5925% ( 228) 00:07:27.803 6200.714 - 6225.920: 33.2763% ( 236) 00:07:27.803 6225.920 - 6251.126: 34.9172% ( 230) 00:07:27.803 6251.126 - 6276.332: 36.6010% ( 236) 00:07:27.803 6276.332 - 6301.538: 38.2420% ( 230) 00:07:27.803 6301.538 - 6326.745: 39.8830% ( 230) 00:07:27.803 6326.745 - 6351.951: 41.5811% ( 238) 00:07:27.803 6351.951 - 
6377.157: 43.1864% ( 225) 00:07:27.803 6377.157 - 6402.363: 44.8701% ( 236) 00:07:27.803 6402.363 - 6427.569: 46.4969% ( 228) 00:07:27.803 6427.569 - 6452.775: 48.0808% ( 222) 00:07:27.803 6452.775 - 6503.188: 50.8205% ( 384) 00:07:27.803 6503.188 - 6553.600: 52.7183% ( 266) 00:07:27.803 6553.600 - 6604.012: 54.0382% ( 185) 00:07:27.803 6604.012 - 6654.425: 54.9015% ( 121) 00:07:27.803 6654.425 - 6704.837: 55.5651% ( 93) 00:07:27.803 6704.837 - 6755.249: 56.0859% ( 73) 00:07:27.803 6755.249 - 6805.662: 56.4426% ( 50) 00:07:27.803 6805.662 - 6856.074: 56.7423% ( 42) 00:07:27.803 6856.074 - 6906.486: 57.0420% ( 42) 00:07:27.803 6906.486 - 6956.898: 57.3131% ( 38) 00:07:27.803 6956.898 - 7007.311: 57.5628% ( 35) 00:07:27.803 7007.311 - 7057.723: 57.8054% ( 34) 00:07:27.803 7057.723 - 7108.135: 57.9909% ( 26) 00:07:27.803 7108.135 - 7158.548: 58.1621% ( 24) 00:07:27.803 7158.548 - 7208.960: 58.3405% ( 25) 00:07:27.803 7208.960 - 7259.372: 58.5688% ( 32) 00:07:27.803 7259.372 - 7309.785: 58.8042% ( 33) 00:07:27.803 7309.785 - 7360.197: 59.0254% ( 31) 00:07:27.803 7360.197 - 7410.609: 59.2608% ( 33) 00:07:27.803 7410.609 - 7461.022: 59.5177% ( 36) 00:07:27.803 7461.022 - 7511.434: 59.7531% ( 33) 00:07:27.803 7511.434 - 7561.846: 59.9743% ( 31) 00:07:27.803 7561.846 - 7612.258: 60.1955% ( 31) 00:07:27.803 7612.258 - 7662.671: 60.3810% ( 26) 00:07:27.803 7662.671 - 7713.083: 60.5808% ( 28) 00:07:27.803 7713.083 - 7763.495: 60.7449% ( 23) 00:07:27.803 7763.495 - 7813.908: 60.9161% ( 24) 00:07:27.803 7813.908 - 7864.320: 61.0873% ( 24) 00:07:27.803 7864.320 - 7914.732: 61.2586% ( 24) 00:07:27.803 7914.732 - 7965.145: 61.4583% ( 28) 00:07:27.803 7965.145 - 8015.557: 61.6510% ( 27) 00:07:27.803 8015.557 - 8065.969: 61.8222% ( 24) 00:07:27.803 8065.969 - 8116.382: 61.9578% ( 19) 00:07:27.803 8116.382 - 8166.794: 62.1290% ( 24) 00:07:27.803 8166.794 - 8217.206: 62.2788% ( 21) 00:07:27.803 8217.206 - 8267.618: 62.4215% ( 20) 00:07:27.803 8267.618 - 8318.031: 62.5285% ( 15) 00:07:27.803 8318.031 - 8368.443: 62.6427% ( 16) 00:07:27.803 8368.443 - 8418.855: 62.7354% ( 13) 00:07:27.803 8418.855 - 8469.268: 62.8282% ( 13) 00:07:27.803 8469.268 - 8519.680: 62.8853% ( 8) 00:07:27.803 8519.680 - 8570.092: 62.9566% ( 10) 00:07:27.803 8570.092 - 8620.505: 63.0351% ( 11) 00:07:27.803 8620.505 - 8670.917: 63.1635% ( 18) 00:07:27.803 8670.917 - 8721.329: 63.3205% ( 22) 00:07:27.803 8721.329 - 8771.742: 63.4846% ( 23) 00:07:27.803 8771.742 - 8822.154: 63.6701% ( 26) 00:07:27.803 8822.154 - 8872.566: 63.8913% ( 31) 00:07:27.803 8872.566 - 8922.978: 64.1124% ( 31) 00:07:27.803 8922.978 - 8973.391: 64.3193% ( 29) 00:07:27.803 8973.391 - 9023.803: 64.4977% ( 25) 00:07:27.803 9023.803 - 9074.215: 64.7046% ( 29) 00:07:27.803 9074.215 - 9124.628: 64.9258% ( 31) 00:07:27.803 9124.628 - 9175.040: 65.1184% ( 27) 00:07:27.803 9175.040 - 9225.452: 65.2968% ( 25) 00:07:27.803 9225.452 - 9275.865: 65.4823% ( 26) 00:07:27.803 9275.865 - 9326.277: 65.6607% ( 25) 00:07:27.803 9326.277 - 9376.689: 65.8462% ( 26) 00:07:27.803 9376.689 - 9427.102: 66.0388% ( 27) 00:07:27.803 9427.102 - 9477.514: 66.2314% ( 27) 00:07:27.803 9477.514 - 9527.926: 66.4241% ( 27) 00:07:27.803 9527.926 - 9578.338: 66.5953% ( 24) 00:07:27.803 9578.338 - 9628.751: 66.8094% ( 30) 00:07:27.803 9628.751 - 9679.163: 67.0805% ( 38) 00:07:27.803 9679.163 - 9729.575: 67.2660% ( 26) 00:07:27.803 9729.575 - 9779.988: 67.3873% ( 17) 00:07:27.803 9779.988 - 9830.400: 67.5300% ( 20) 00:07:27.803 9830.400 - 9880.812: 67.6513% ( 17) 00:07:27.803 9880.812 - 9931.225: 67.7797% 
( 18) 00:07:27.803 9931.225 - 9981.637: 67.8938% ( 16) 00:07:27.803 9981.637 - 10032.049: 67.9866% ( 13) 00:07:27.803 10032.049 - 10082.462: 68.0865% ( 14) 00:07:27.803 10082.462 - 10132.874: 68.2149% ( 18) 00:07:27.803 10132.874 - 10183.286: 68.3005% ( 12) 00:07:27.803 10183.286 - 10233.698: 68.4218% ( 17) 00:07:27.803 10233.698 - 10284.111: 68.5431% ( 17) 00:07:27.803 10284.111 - 10334.523: 68.6858% ( 20) 00:07:27.803 10334.523 - 10384.935: 68.7785% ( 13) 00:07:27.803 10384.935 - 10435.348: 68.8927% ( 16) 00:07:27.803 10435.348 - 10485.760: 68.9712% ( 11) 00:07:27.803 10485.760 - 10536.172: 69.0711% ( 14) 00:07:27.803 10536.172 - 10586.585: 69.1781% ( 15) 00:07:27.803 10586.585 - 10636.997: 69.2637% ( 12) 00:07:27.803 10636.997 - 10687.409: 69.3850% ( 17) 00:07:27.803 10687.409 - 10737.822: 69.5063% ( 17) 00:07:27.803 10737.822 - 10788.234: 69.6204% ( 16) 00:07:27.803 10788.234 - 10838.646: 69.7346% ( 16) 00:07:27.803 10838.646 - 10889.058: 69.8487% ( 16) 00:07:27.803 10889.058 - 10939.471: 69.9558% ( 15) 00:07:27.803 10939.471 - 10989.883: 70.0557% ( 14) 00:07:27.803 10989.883 - 11040.295: 70.1841% ( 18) 00:07:27.803 11040.295 - 11090.708: 70.3125% ( 18) 00:07:27.803 11090.708 - 11141.120: 70.4195% ( 15) 00:07:27.803 11141.120 - 11191.532: 70.5693% ( 21) 00:07:27.803 11191.532 - 11241.945: 70.8048% ( 33) 00:07:27.803 11241.945 - 11292.357: 70.9332% ( 18) 00:07:27.803 11292.357 - 11342.769: 71.1187% ( 26) 00:07:27.803 11342.769 - 11393.182: 71.2828% ( 23) 00:07:27.803 11393.182 - 11443.594: 71.4541% ( 24) 00:07:27.803 11443.594 - 11494.006: 71.6253% ( 24) 00:07:27.803 11494.006 - 11544.418: 71.8037% ( 25) 00:07:27.803 11544.418 - 11594.831: 72.0248% ( 31) 00:07:27.803 11594.831 - 11645.243: 72.2317% ( 29) 00:07:27.803 11645.243 - 11695.655: 72.4458% ( 30) 00:07:27.803 11695.655 - 11746.068: 72.6812% ( 33) 00:07:27.803 11746.068 - 11796.480: 72.8881% ( 29) 00:07:27.803 11796.480 - 11846.892: 73.1521% ( 37) 00:07:27.803 11846.892 - 11897.305: 73.4446% ( 41) 00:07:27.803 11897.305 - 11947.717: 73.7372% ( 41) 00:07:27.803 11947.717 - 11998.129: 74.0368% ( 42) 00:07:27.803 11998.129 - 12048.542: 74.2865% ( 35) 00:07:27.803 12048.542 - 12098.954: 74.5434% ( 36) 00:07:27.803 12098.954 - 12149.366: 74.7931% ( 35) 00:07:27.803 12149.366 - 12199.778: 75.0214% ( 32) 00:07:27.803 12199.778 - 12250.191: 75.2925% ( 38) 00:07:27.803 12250.191 - 12300.603: 75.5494% ( 36) 00:07:27.803 12300.603 - 12351.015: 75.7920% ( 34) 00:07:27.803 12351.015 - 12401.428: 76.0559% ( 37) 00:07:27.803 12401.428 - 12451.840: 76.3057% ( 35) 00:07:27.803 12451.840 - 12502.252: 76.5696% ( 37) 00:07:27.803 12502.252 - 12552.665: 76.8550% ( 40) 00:07:27.803 12552.665 - 12603.077: 77.2046% ( 49) 00:07:27.803 12603.077 - 12653.489: 77.4686% ( 37) 00:07:27.803 12653.489 - 12703.902: 77.7611% ( 41) 00:07:27.803 12703.902 - 12754.314: 78.0180% ( 36) 00:07:27.803 12754.314 - 12804.726: 78.3034% ( 40) 00:07:27.803 12804.726 - 12855.138: 78.5888% ( 40) 00:07:27.803 12855.138 - 12905.551: 78.8527% ( 37) 00:07:27.803 12905.551 - 13006.375: 79.4164% ( 79) 00:07:27.803 13006.375 - 13107.200: 80.0728% ( 92) 00:07:27.803 13107.200 - 13208.025: 80.6364% ( 79) 00:07:27.803 13208.025 - 13308.849: 81.2215% ( 82) 00:07:27.803 13308.849 - 13409.674: 81.7637% ( 76) 00:07:27.803 13409.674 - 13510.498: 82.3202% ( 78) 00:07:27.803 13510.498 - 13611.323: 82.8054% ( 68) 00:07:27.803 13611.323 - 13712.148: 83.3119% ( 71) 00:07:27.803 13712.148 - 13812.972: 83.7900% ( 67) 00:07:27.803 13812.972 - 13913.797: 84.3536% ( 79) 00:07:27.803 13913.797 - 
14014.622: 84.8530% ( 70) 00:07:27.803 14014.622 - 14115.446: 85.2383% ( 54) 00:07:27.803 14115.446 - 14216.271: 85.5522% ( 44) 00:07:27.803 14216.271 - 14317.095: 85.8590% ( 43) 00:07:27.803 14317.095 - 14417.920: 86.1729% ( 44) 00:07:27.803 14417.920 - 14518.745: 86.5225% ( 49) 00:07:27.803 14518.745 - 14619.569: 86.9007% ( 53) 00:07:27.803 14619.569 - 14720.394: 87.3288% ( 60) 00:07:27.803 14720.394 - 14821.218: 87.6998% ( 52) 00:07:27.803 14821.218 - 14922.043: 88.0422% ( 48) 00:07:27.803 14922.043 - 15022.868: 88.3490% ( 43) 00:07:27.803 15022.868 - 15123.692: 88.6701% ( 45) 00:07:27.803 15123.692 - 15224.517: 89.0625% ( 55) 00:07:27.803 15224.517 - 15325.342: 89.4834% ( 59) 00:07:27.803 15325.342 - 15426.166: 89.9044% ( 59) 00:07:27.803 15426.166 - 15526.991: 90.3039% ( 56) 00:07:27.803 15526.991 - 15627.815: 90.7677% ( 65) 00:07:27.803 15627.815 - 15728.640: 91.2814% ( 72) 00:07:27.803 15728.640 - 15829.465: 91.6096% ( 46) 00:07:27.803 15829.465 - 15930.289: 91.9449% ( 47) 00:07:27.803 15930.289 - 16031.114: 92.2874% ( 48) 00:07:27.803 16031.114 - 16131.938: 92.6084% ( 45) 00:07:27.803 16131.938 - 16232.763: 92.8938% ( 40) 00:07:27.803 16232.763 - 16333.588: 93.1293% ( 33) 00:07:27.803 16333.588 - 16434.412: 93.3433% ( 30) 00:07:27.803 16434.412 - 16535.237: 93.5217% ( 25) 00:07:27.804 16535.237 - 16636.062: 93.6929% ( 24) 00:07:27.804 16636.062 - 16736.886: 93.8713% ( 25) 00:07:27.804 16736.886 - 16837.711: 94.0711% ( 28) 00:07:27.804 16837.711 - 16938.535: 94.3065% ( 33) 00:07:27.804 16938.535 - 17039.360: 94.5063% ( 28) 00:07:27.804 17039.360 - 17140.185: 94.6775% ( 24) 00:07:27.804 17140.185 - 17241.009: 94.9130% ( 33) 00:07:27.804 17241.009 - 17341.834: 95.1627% ( 35) 00:07:27.804 17341.834 - 17442.658: 95.3624% ( 28) 00:07:27.804 17442.658 - 17543.483: 95.5551% ( 27) 00:07:27.804 17543.483 - 17644.308: 95.7905% ( 33) 00:07:27.804 17644.308 - 17745.132: 96.0759% ( 40) 00:07:27.804 17745.132 - 17845.957: 96.4041% ( 46) 00:07:27.804 17845.957 - 17946.782: 96.6895% ( 40) 00:07:27.804 17946.782 - 18047.606: 96.9392% ( 35) 00:07:27.804 18047.606 - 18148.431: 97.1747% ( 33) 00:07:27.804 18148.431 - 18249.255: 97.3602% ( 26) 00:07:27.804 18249.255 - 18350.080: 97.5599% ( 28) 00:07:27.804 18350.080 - 18450.905: 97.7597% ( 28) 00:07:27.804 18450.905 - 18551.729: 98.0166% ( 36) 00:07:27.804 18551.729 - 18652.554: 98.1878% ( 24) 00:07:27.804 18652.554 - 18753.378: 98.3233% ( 19) 00:07:27.804 18753.378 - 18854.203: 98.4660% ( 20) 00:07:27.804 18854.203 - 18955.028: 98.5945% ( 18) 00:07:27.804 18955.028 - 19055.852: 98.7300% ( 19) 00:07:27.804 19055.852 - 19156.677: 98.8656% ( 19) 00:07:27.804 19156.677 - 19257.502: 98.9655% ( 14) 00:07:27.804 19257.502 - 19358.326: 99.0225% ( 8) 00:07:27.804 19358.326 - 19459.151: 99.0725% ( 7) 00:07:27.804 19459.151 - 19559.975: 99.0868% ( 2) 00:07:27.804 29239.138 - 29440.788: 99.1153% ( 4) 00:07:27.804 29440.788 - 29642.437: 99.1724% ( 8) 00:07:27.804 29642.437 - 29844.086: 99.2366% ( 9) 00:07:27.804 29844.086 - 30045.735: 99.2937% ( 8) 00:07:27.804 30045.735 - 30247.385: 99.3579% ( 9) 00:07:27.804 30247.385 - 30449.034: 99.4221% ( 9) 00:07:27.804 30449.034 - 30650.683: 99.4792% ( 8) 00:07:27.804 30650.683 - 30852.332: 99.5434% ( 9) 00:07:27.804 39119.951 - 39321.600: 99.5719% ( 4) 00:07:27.804 39321.600 - 39523.249: 99.6361% ( 9) 00:07:27.804 39523.249 - 39724.898: 99.6789% ( 6) 00:07:27.804 39724.898 - 39926.548: 99.7432% ( 9) 00:07:27.804 39926.548 - 40128.197: 99.8074% ( 9) 00:07:27.804 40128.197 - 40329.846: 99.8644% ( 8) 00:07:27.804 40329.846 - 
40531.495: 99.9287% ( 9) 00:07:27.804 40531.495 - 40733.145: 99.9929% ( 9) 00:07:27.804 40733.145 - 40934.794: 100.0000% ( 1) 00:07:27.804 00:07:27.804 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:27.804 ============================================================================== 00:07:27.804 Range in us Cumulative IO count 00:07:27.804 5570.560 - 5595.766: 0.0285% ( 4) 00:07:27.804 5595.766 - 5620.972: 0.1712% ( 20) 00:07:27.804 5620.972 - 5646.178: 0.3781% ( 29) 00:07:27.804 5646.178 - 5671.385: 0.7920% ( 58) 00:07:27.804 5671.385 - 5696.591: 1.4840% ( 97) 00:07:27.804 5696.591 - 5721.797: 2.4401% ( 134) 00:07:27.804 5721.797 - 5747.003: 3.4889% ( 147) 00:07:27.804 5747.003 - 5772.209: 4.7374% ( 175) 00:07:27.804 5772.209 - 5797.415: 6.2143% ( 207) 00:07:27.804 5797.415 - 5822.622: 7.4914% ( 179) 00:07:27.804 5822.622 - 5847.828: 8.8185% ( 186) 00:07:27.804 5847.828 - 5873.034: 10.4024% ( 222) 00:07:27.804 5873.034 - 5898.240: 11.9720% ( 220) 00:07:27.804 5898.240 - 5923.446: 13.5060% ( 215) 00:07:27.804 5923.446 - 5948.652: 15.0400% ( 215) 00:07:27.804 5948.652 - 5973.858: 16.6881% ( 231) 00:07:27.804 5973.858 - 5999.065: 18.2720% ( 222) 00:07:27.804 5999.065 - 6024.271: 19.9344% ( 233) 00:07:27.804 6024.271 - 6049.477: 21.5397% ( 225) 00:07:27.804 6049.477 - 6074.683: 23.1664% ( 228) 00:07:27.804 6074.683 - 6099.889: 24.8573% ( 237) 00:07:27.804 6099.889 - 6125.095: 26.4697% ( 226) 00:07:27.804 6125.095 - 6150.302: 28.1321% ( 233) 00:07:27.804 6150.302 - 6175.508: 29.7803% ( 231) 00:07:27.804 6175.508 - 6200.714: 31.4712% ( 237) 00:07:27.804 6200.714 - 6225.920: 33.1193% ( 231) 00:07:27.804 6225.920 - 6251.126: 34.7531% ( 229) 00:07:27.804 6251.126 - 6276.332: 36.3799% ( 228) 00:07:27.804 6276.332 - 6301.538: 38.0351% ( 232) 00:07:27.804 6301.538 - 6326.745: 39.6618% ( 228) 00:07:27.804 6326.745 - 6351.951: 41.3884% ( 242) 00:07:27.804 6351.951 - 6377.157: 43.0651% ( 235) 00:07:27.804 6377.157 - 6402.363: 44.7417% ( 235) 00:07:27.804 6402.363 - 6427.569: 46.4326% ( 237) 00:07:27.804 6427.569 - 6452.775: 48.0522% ( 227) 00:07:27.804 6452.775 - 6503.188: 50.7420% ( 377) 00:07:27.804 6503.188 - 6553.600: 52.6898% ( 273) 00:07:27.804 6553.600 - 6604.012: 54.0525% ( 191) 00:07:27.804 6604.012 - 6654.425: 54.9943% ( 132) 00:07:27.804 6654.425 - 6704.837: 55.6293% ( 89) 00:07:27.804 6704.837 - 6755.249: 56.1287% ( 70) 00:07:27.804 6755.249 - 6805.662: 56.4926% ( 51) 00:07:27.804 6805.662 - 6856.074: 56.8350% ( 48) 00:07:27.804 6856.074 - 6906.486: 57.1276% ( 41) 00:07:27.804 6906.486 - 6956.898: 57.3844% ( 36) 00:07:27.804 6956.898 - 7007.311: 57.6341% ( 35) 00:07:27.804 7007.311 - 7057.723: 57.8838% ( 35) 00:07:27.804 7057.723 - 7108.135: 58.1050% ( 31) 00:07:27.804 7108.135 - 7158.548: 58.2977% ( 27) 00:07:27.804 7158.548 - 7208.960: 58.4974% ( 28) 00:07:27.804 7208.960 - 7259.372: 58.6544% ( 22) 00:07:27.804 7259.372 - 7309.785: 58.8114% ( 22) 00:07:27.804 7309.785 - 7360.197: 58.9541% ( 20) 00:07:27.804 7360.197 - 7410.609: 59.1110% ( 22) 00:07:27.804 7410.609 - 7461.022: 59.2894% ( 25) 00:07:27.804 7461.022 - 7511.434: 59.4820% ( 27) 00:07:27.804 7511.434 - 7561.846: 59.6604% ( 25) 00:07:27.804 7561.846 - 7612.258: 59.8459% ( 26) 00:07:27.804 7612.258 - 7662.671: 60.0314% ( 26) 00:07:27.804 7662.671 - 7713.083: 60.2526% ( 31) 00:07:27.804 7713.083 - 7763.495: 60.4167% ( 23) 00:07:27.804 7763.495 - 7813.908: 60.5736% ( 22) 00:07:27.804 7813.908 - 7864.320: 60.7306% ( 22) 00:07:27.804 7864.320 - 7914.732: 60.9018% ( 24) 00:07:27.804 7914.732 - 7965.145: 61.0802% ( 
25) 00:07:27.804 7965.145 - 8015.557: 61.2443% ( 23) 00:07:27.804 8015.557 - 8065.969: 61.4084% ( 23) 00:07:27.804 8065.969 - 8116.382: 61.5939% ( 26) 00:07:27.804 8116.382 - 8166.794: 61.7651% ( 24) 00:07:27.804 8166.794 - 8217.206: 61.9364% ( 24) 00:07:27.804 8217.206 - 8267.618: 62.1504% ( 30) 00:07:27.804 8267.618 - 8318.031: 62.3502% ( 28) 00:07:27.804 8318.031 - 8368.443: 62.4857% ( 19) 00:07:27.804 8368.443 - 8418.855: 62.5999% ( 16) 00:07:27.804 8418.855 - 8469.268: 62.7568% ( 22) 00:07:27.804 8469.268 - 8519.680: 62.9281% ( 24) 00:07:27.804 8519.680 - 8570.092: 63.1207% ( 27) 00:07:27.804 8570.092 - 8620.505: 63.3562% ( 33) 00:07:27.804 8620.505 - 8670.917: 63.5631% ( 29) 00:07:27.804 8670.917 - 8721.329: 63.7771% ( 30) 00:07:27.804 8721.329 - 8771.742: 63.9341% ( 22) 00:07:27.804 8771.742 - 8822.154: 64.0910% ( 22) 00:07:27.804 8822.154 - 8872.566: 64.2694% ( 25) 00:07:27.804 8872.566 - 8922.978: 64.4478% ( 25) 00:07:27.804 8922.978 - 8973.391: 64.6547% ( 29) 00:07:27.804 8973.391 - 9023.803: 64.9187% ( 37) 00:07:27.804 9023.803 - 9074.215: 65.1612% ( 34) 00:07:27.804 9074.215 - 9124.628: 65.3682% ( 29) 00:07:27.804 9124.628 - 9175.040: 65.5965% ( 32) 00:07:27.804 9175.040 - 9225.452: 65.7962% ( 28) 00:07:27.804 9225.452 - 9275.865: 65.9746% ( 25) 00:07:27.804 9275.865 - 9326.277: 66.1744% ( 28) 00:07:27.804 9326.277 - 9376.689: 66.3599% ( 26) 00:07:27.804 9376.689 - 9427.102: 66.6167% ( 36) 00:07:27.804 9427.102 - 9477.514: 66.7951% ( 25) 00:07:27.804 9477.514 - 9527.926: 66.9521% ( 22) 00:07:27.804 9527.926 - 9578.338: 67.1019% ( 21) 00:07:27.804 9578.338 - 9628.751: 67.2303% ( 18) 00:07:27.804 9628.751 - 9679.163: 67.3730% ( 20) 00:07:27.804 9679.163 - 9729.575: 67.5157% ( 20) 00:07:27.804 9729.575 - 9779.988: 67.6513% ( 19) 00:07:27.804 9779.988 - 9830.400: 67.8011% ( 21) 00:07:27.804 9830.400 - 9880.812: 67.8867% ( 12) 00:07:27.804 9880.812 - 9931.225: 67.9866% ( 14) 00:07:27.804 9931.225 - 9981.637: 68.0722% ( 12) 00:07:27.804 9981.637 - 10032.049: 68.1507% ( 11) 00:07:27.804 10032.049 - 10082.462: 68.2648% ( 16) 00:07:27.804 10082.462 - 10132.874: 68.3362% ( 10) 00:07:27.804 10132.874 - 10183.286: 68.4289% ( 13) 00:07:27.804 10183.286 - 10233.698: 68.5502% ( 17) 00:07:27.804 10233.698 - 10284.111: 68.6501% ( 14) 00:07:27.804 10284.111 - 10334.523: 68.7357% ( 12) 00:07:27.804 10334.523 - 10384.935: 68.8213% ( 12) 00:07:27.804 10384.935 - 10435.348: 68.9355% ( 16) 00:07:27.804 10435.348 - 10485.760: 69.0568% ( 17) 00:07:27.804 10485.760 - 10536.172: 69.1709% ( 16) 00:07:27.804 10536.172 - 10586.585: 69.2994% ( 18) 00:07:27.804 10586.585 - 10636.997: 69.4349% ( 19) 00:07:27.804 10636.997 - 10687.409: 69.5705% ( 19) 00:07:27.804 10687.409 - 10737.822: 69.7061% ( 19) 00:07:27.804 10737.822 - 10788.234: 69.7917% ( 12) 00:07:27.804 10788.234 - 10838.646: 69.8987% ( 15) 00:07:27.804 10838.646 - 10889.058: 70.0057% ( 15) 00:07:27.804 10889.058 - 10939.471: 70.0985% ( 13) 00:07:27.804 10939.471 - 10989.883: 70.1698% ( 10) 00:07:27.804 10989.883 - 11040.295: 70.2412% ( 10) 00:07:27.804 11040.295 - 11090.708: 70.3268% ( 12) 00:07:27.804 11090.708 - 11141.120: 70.3981% ( 10) 00:07:27.804 11141.120 - 11191.532: 70.4766% ( 11) 00:07:27.804 11191.532 - 11241.945: 70.6050% ( 18) 00:07:27.804 11241.945 - 11292.357: 70.7406% ( 19) 00:07:27.804 11292.357 - 11342.769: 70.8761% ( 19) 00:07:27.804 11342.769 - 11393.182: 71.0046% ( 18) 00:07:27.804 11393.182 - 11443.594: 71.1829% ( 25) 00:07:27.804 11443.594 - 11494.006: 71.3827% ( 28) 00:07:27.804 11494.006 - 11544.418: 71.6824% ( 42) 
00:07:27.804 11544.418 - 11594.831: 71.9178% ( 33) 00:07:27.804 11594.831 - 11645.243: 72.1533% ( 33) 00:07:27.804 11645.243 - 11695.655: 72.4172% ( 37) 00:07:27.805 11695.655 - 11746.068: 72.6741% ( 36) 00:07:27.805 11746.068 - 11796.480: 72.9595% ( 40) 00:07:27.805 11796.480 - 11846.892: 73.2734% ( 44) 00:07:27.805 11846.892 - 11897.305: 73.5873% ( 44) 00:07:27.805 11897.305 - 11947.717: 73.9512% ( 51) 00:07:27.805 11947.717 - 11998.129: 74.3222% ( 52) 00:07:27.805 11998.129 - 12048.542: 74.6361% ( 44) 00:07:27.805 12048.542 - 12098.954: 74.9358% ( 42) 00:07:27.805 12098.954 - 12149.366: 75.2783% ( 48) 00:07:27.805 12149.366 - 12199.778: 75.6279% ( 49) 00:07:27.805 12199.778 - 12250.191: 75.9917% ( 51) 00:07:27.805 12250.191 - 12300.603: 76.2914% ( 42) 00:07:27.805 12300.603 - 12351.015: 76.6196% ( 46) 00:07:27.805 12351.015 - 12401.428: 76.9121% ( 41) 00:07:27.805 12401.428 - 12451.840: 77.2974% ( 54) 00:07:27.805 12451.840 - 12502.252: 77.5899% ( 41) 00:07:27.805 12502.252 - 12552.665: 77.8824% ( 41) 00:07:27.805 12552.665 - 12603.077: 78.1821% ( 42) 00:07:27.805 12603.077 - 12653.489: 78.4318% ( 35) 00:07:27.805 12653.489 - 12703.902: 78.7172% ( 40) 00:07:27.805 12703.902 - 12754.314: 79.0382% ( 45) 00:07:27.805 12754.314 - 12804.726: 79.3379% ( 42) 00:07:27.805 12804.726 - 12855.138: 79.5947% ( 36) 00:07:27.805 12855.138 - 12905.551: 79.8587% ( 37) 00:07:27.805 12905.551 - 13006.375: 80.4081% ( 77) 00:07:27.805 13006.375 - 13107.200: 80.8647% ( 64) 00:07:27.805 13107.200 - 13208.025: 81.3642% ( 70) 00:07:27.805 13208.025 - 13308.849: 81.8279% ( 65) 00:07:27.805 13308.849 - 13409.674: 82.3059% ( 67) 00:07:27.805 13409.674 - 13510.498: 82.7911% ( 68) 00:07:27.805 13510.498 - 13611.323: 83.2691% ( 67) 00:07:27.805 13611.323 - 13712.148: 83.6829% ( 58) 00:07:27.805 13712.148 - 13812.972: 83.9826% ( 42) 00:07:27.805 13812.972 - 13913.797: 84.3893% ( 57) 00:07:27.805 13913.797 - 14014.622: 84.7175% ( 46) 00:07:27.805 14014.622 - 14115.446: 85.0029% ( 40) 00:07:27.805 14115.446 - 14216.271: 85.3096% ( 43) 00:07:27.805 14216.271 - 14317.095: 85.6307% ( 45) 00:07:27.805 14317.095 - 14417.920: 85.9090% ( 39) 00:07:27.805 14417.920 - 14518.745: 86.2015% ( 41) 00:07:27.805 14518.745 - 14619.569: 86.4797% ( 39) 00:07:27.805 14619.569 - 14720.394: 86.7366% ( 36) 00:07:27.805 14720.394 - 14821.218: 86.9578% ( 31) 00:07:27.805 14821.218 - 14922.043: 87.2574% ( 42) 00:07:27.805 14922.043 - 15022.868: 87.5357% ( 39) 00:07:27.805 15022.868 - 15123.692: 87.8781% ( 48) 00:07:27.805 15123.692 - 15224.517: 88.1992% ( 45) 00:07:27.805 15224.517 - 15325.342: 88.5345% ( 47) 00:07:27.805 15325.342 - 15426.166: 88.9127% ( 53) 00:07:27.805 15426.166 - 15526.991: 89.3051% ( 55) 00:07:27.805 15526.991 - 15627.815: 89.7760% ( 66) 00:07:27.805 15627.815 - 15728.640: 90.2611% ( 68) 00:07:27.805 15728.640 - 15829.465: 90.6821% ( 59) 00:07:27.805 15829.465 - 15930.289: 91.1815% ( 70) 00:07:27.805 15930.289 - 16031.114: 91.6881% ( 71) 00:07:27.805 16031.114 - 16131.938: 92.1376% ( 63) 00:07:27.805 16131.938 - 16232.763: 92.4872% ( 49) 00:07:27.805 16232.763 - 16333.588: 92.8368% ( 49) 00:07:27.805 16333.588 - 16434.412: 93.1935% ( 50) 00:07:27.805 16434.412 - 16535.237: 93.5146% ( 45) 00:07:27.805 16535.237 - 16636.062: 93.8213% ( 43) 00:07:27.805 16636.062 - 16736.886: 94.1139% ( 41) 00:07:27.805 16736.886 - 16837.711: 94.3422% ( 32) 00:07:27.805 16837.711 - 16938.535: 94.5705% ( 32) 00:07:27.805 16938.535 - 17039.360: 94.8202% ( 35) 00:07:27.805 17039.360 - 17140.185: 95.0557% ( 33) 00:07:27.805 17140.185 - 17241.009: 
95.2840% ( 32) 00:07:27.805 17241.009 - 17341.834: 95.5622% ( 39) 00:07:27.805 17341.834 - 17442.658: 95.8191% ( 36) 00:07:27.805 17442.658 - 17543.483: 96.0545% ( 33) 00:07:27.805 17543.483 - 17644.308: 96.2757% ( 31) 00:07:27.805 17644.308 - 17745.132: 96.4969% ( 31) 00:07:27.805 17745.132 - 17845.957: 96.7252% ( 32) 00:07:27.805 17845.957 - 17946.782: 96.9107% ( 26) 00:07:27.805 17946.782 - 18047.606: 97.0748% ( 23) 00:07:27.805 18047.606 - 18148.431: 97.2531% ( 25) 00:07:27.805 18148.431 - 18249.255: 97.4672% ( 30) 00:07:27.805 18249.255 - 18350.080: 97.6455% ( 25) 00:07:27.805 18350.080 - 18450.905: 97.8311% ( 26) 00:07:27.805 18450.905 - 18551.729: 98.0094% ( 25) 00:07:27.805 18551.729 - 18652.554: 98.1592% ( 21) 00:07:27.805 18652.554 - 18753.378: 98.2449% ( 12) 00:07:27.805 18753.378 - 18854.203: 98.3091% ( 9) 00:07:27.805 18854.203 - 18955.028: 98.3733% ( 9) 00:07:27.805 18955.028 - 19055.852: 98.4304% ( 8) 00:07:27.805 19055.852 - 19156.677: 98.4946% ( 9) 00:07:27.805 19156.677 - 19257.502: 98.5731% ( 11) 00:07:27.805 19257.502 - 19358.326: 98.6301% ( 8) 00:07:27.805 19358.326 - 19459.151: 98.6729% ( 6) 00:07:27.805 19459.151 - 19559.975: 98.7229% ( 7) 00:07:27.805 19559.975 - 19660.800: 98.7586% ( 5) 00:07:27.805 19660.800 - 19761.625: 98.8014% ( 6) 00:07:27.805 19761.625 - 19862.449: 98.8513% ( 7) 00:07:27.805 19862.449 - 19963.274: 98.8870% ( 5) 00:07:27.805 19963.274 - 20064.098: 98.9369% ( 7) 00:07:27.805 20064.098 - 20164.923: 98.9797% ( 6) 00:07:27.805 20164.923 - 20265.748: 99.0297% ( 7) 00:07:27.805 20265.748 - 20366.572: 99.0725% ( 6) 00:07:27.805 20366.572 - 20467.397: 99.0868% ( 2) 00:07:27.805 27827.594 - 28029.243: 99.1153% ( 4) 00:07:27.805 28029.243 - 28230.892: 99.1795% ( 9) 00:07:27.805 28230.892 - 28432.542: 99.2366% ( 8) 00:07:27.805 28432.542 - 28634.191: 99.3008% ( 9) 00:07:27.805 28634.191 - 28835.840: 99.3507% ( 7) 00:07:27.805 28835.840 - 29037.489: 99.4078% ( 8) 00:07:27.805 29037.489 - 29239.138: 99.4720% ( 9) 00:07:27.805 29239.138 - 29440.788: 99.5362% ( 9) 00:07:27.805 29440.788 - 29642.437: 99.5434% ( 1) 00:07:27.805 38111.705 - 38313.354: 99.6005% ( 8) 00:07:27.805 38313.354 - 38515.003: 99.6575% ( 8) 00:07:27.805 38515.003 - 38716.652: 99.7146% ( 8) 00:07:27.805 38716.652 - 38918.302: 99.7717% ( 8) 00:07:27.805 38918.302 - 39119.951: 99.8359% ( 9) 00:07:27.805 39119.951 - 39321.600: 99.9001% ( 9) 00:07:27.805 39321.600 - 39523.249: 99.9572% ( 8) 00:07:27.805 39523.249 - 39724.898: 100.0000% ( 6) 00:07:27.805 00:07:27.805 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:27.805 ============================================================================== 00:07:27.805 Range in us Cumulative IO count 00:07:27.805 5494.942 - 5520.148: 0.0428% ( 6) 00:07:27.805 5520.148 - 5545.354: 0.1641% ( 17) 00:07:27.805 5545.354 - 5570.560: 0.4495% ( 40) 00:07:27.805 5570.560 - 5595.766: 0.9632% ( 72) 00:07:27.805 5595.766 - 5620.972: 1.6410% ( 95) 00:07:27.805 5620.972 - 5646.178: 2.5542% ( 128) 00:07:27.805 5646.178 - 5671.385: 3.4033% ( 119) 00:07:27.805 5671.385 - 5696.591: 4.5020% ( 154) 00:07:27.805 5696.591 - 5721.797: 5.5508% ( 147) 00:07:27.805 5721.797 - 5747.003: 6.6638% ( 156) 00:07:27.805 5747.003 - 5772.209: 7.8268% ( 163) 00:07:27.805 5772.209 - 5797.415: 9.1752% ( 189) 00:07:27.805 5797.415 - 5822.622: 10.3453% ( 164) 00:07:27.805 5822.622 - 5847.828: 11.7080% ( 191) 00:07:27.805 5847.828 - 5873.034: 12.9352% ( 172) 00:07:27.805 5873.034 - 5898.240: 14.1909% ( 176) 00:07:27.805 5898.240 - 5923.446: 15.5394% ( 189) 00:07:27.805 
5923.446 - 5948.652: 16.8807% ( 188) 00:07:27.805 5948.652 - 5973.858: 18.2506% ( 192) 00:07:27.805 5973.858 - 5999.065: 19.6989% ( 203) 00:07:27.805 5999.065 - 6024.271: 20.9974% ( 182) 00:07:27.805 6024.271 - 6049.477: 22.3958% ( 196) 00:07:27.805 6049.477 - 6074.683: 23.7800% ( 194) 00:07:27.805 6074.683 - 6099.889: 25.1998% ( 199) 00:07:27.805 6099.889 - 6125.095: 26.5126% ( 184) 00:07:27.805 6125.095 - 6150.302: 27.9110% ( 196) 00:07:27.805 6150.302 - 6175.508: 29.3878% ( 207) 00:07:27.805 6175.508 - 6200.714: 30.7506% ( 191) 00:07:27.805 6200.714 - 6225.920: 32.2346% ( 208) 00:07:27.805 6225.920 - 6251.126: 33.6330% ( 196) 00:07:27.805 6251.126 - 6276.332: 35.0885% ( 204) 00:07:27.805 6276.332 - 6301.538: 36.5154% ( 200) 00:07:27.805 6301.538 - 6326.745: 37.8995% ( 194) 00:07:27.805 6326.745 - 6351.951: 39.2979% ( 196) 00:07:27.805 6351.951 - 6377.157: 40.8105% ( 212) 00:07:27.805 6377.157 - 6402.363: 42.1946% ( 194) 00:07:27.805 6402.363 - 6427.569: 43.6002% ( 197) 00:07:27.805 6427.569 - 6452.775: 45.0342% ( 201) 00:07:27.805 6452.775 - 6503.188: 47.9309% ( 406) 00:07:27.805 6503.188 - 6553.600: 50.6279% ( 378) 00:07:27.805 6553.600 - 6604.012: 52.6898% ( 289) 00:07:27.805 6604.012 - 6654.425: 54.0811% ( 195) 00:07:27.805 6654.425 - 6704.837: 55.0799% ( 140) 00:07:27.805 6704.837 - 6755.249: 55.8790% ( 112) 00:07:27.805 6755.249 - 6805.662: 56.4498% ( 80) 00:07:27.805 6805.662 - 6856.074: 56.8493% ( 56) 00:07:27.805 6856.074 - 6906.486: 57.2417% ( 55) 00:07:27.805 6906.486 - 6956.898: 57.5485% ( 43) 00:07:27.805 6956.898 - 7007.311: 57.7483% ( 28) 00:07:27.805 7007.311 - 7057.723: 57.9695% ( 31) 00:07:27.805 7057.723 - 7108.135: 58.1621% ( 27) 00:07:27.805 7108.135 - 7158.548: 58.3262% ( 23) 00:07:27.805 7158.548 - 7208.960: 58.4974% ( 24) 00:07:27.805 7208.960 - 7259.372: 58.6473% ( 21) 00:07:27.805 7259.372 - 7309.785: 58.7757% ( 18) 00:07:27.805 7309.785 - 7360.197: 58.9112% ( 19) 00:07:27.805 7360.197 - 7410.609: 59.0325% ( 17) 00:07:27.805 7410.609 - 7461.022: 59.1681% ( 19) 00:07:27.805 7461.022 - 7511.434: 59.3108% ( 20) 00:07:27.805 7511.434 - 7561.846: 59.4463% ( 19) 00:07:27.805 7561.846 - 7612.258: 59.6461% ( 28) 00:07:27.805 7612.258 - 7662.671: 59.8245% ( 25) 00:07:27.805 7662.671 - 7713.083: 59.9814% ( 22) 00:07:27.805 7713.083 - 7763.495: 60.1170% ( 19) 00:07:27.805 7763.495 - 7813.908: 60.2740% ( 22) 00:07:27.805 7813.908 - 7864.320: 60.4238% ( 21) 00:07:27.805 7864.320 - 7914.732: 60.5308% ( 15) 00:07:27.805 7914.732 - 7965.145: 60.6878% ( 22) 00:07:27.805 7965.145 - 8015.557: 60.9161% ( 32) 00:07:27.805 8015.557 - 8065.969: 61.0945% ( 25) 00:07:27.806 8065.969 - 8116.382: 61.3156% ( 31) 00:07:27.806 8116.382 - 8166.794: 61.5939% ( 39) 00:07:27.806 8166.794 - 8217.206: 61.8222% ( 32) 00:07:27.806 8217.206 - 8267.618: 62.0791% ( 36) 00:07:27.806 8267.618 - 8318.031: 62.3716% ( 41) 00:07:27.806 8318.031 - 8368.443: 62.5928% ( 31) 00:07:27.806 8368.443 - 8418.855: 62.8139% ( 31) 00:07:27.806 8418.855 - 8469.268: 63.1207% ( 43) 00:07:27.806 8469.268 - 8519.680: 63.4989% ( 53) 00:07:27.806 8519.680 - 8570.092: 63.8271% ( 46) 00:07:27.806 8570.092 - 8620.505: 64.0839% ( 36) 00:07:27.806 8620.505 - 8670.917: 64.3265% ( 34) 00:07:27.806 8670.917 - 8721.329: 64.5405% ( 30) 00:07:27.806 8721.329 - 8771.742: 64.7474% ( 29) 00:07:27.806 8771.742 - 8822.154: 64.9258% ( 25) 00:07:27.806 8822.154 - 8872.566: 65.1612% ( 33) 00:07:27.806 8872.566 - 8922.978: 65.3253% ( 23) 00:07:27.806 8922.978 - 8973.391: 65.5037% ( 25) 00:07:27.806 8973.391 - 9023.803: 65.6678% ( 23) 00:07:27.806 
00:07:27.806 [continuation of the preceding latency histogram: per-bucket rows elided, 9023.803us through 38515.003us, cumulative 65.8176% to 100.0000%]
00:07:27.807 
00:07:27.807 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:27.807 ==============================================================================
00:07:27.807        Range in us     Cumulative    IO count
00:07:27.807 [per-bucket rows elided: 5570.560us through 37910.055us, cumulative 0.0285% to 100.0000%]
00:07:27.808 
00:07:27.808 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:27.808 ==============================================================================
00:07:27.808        Range in us     Cumulative    IO count
00:07:27.808 [per-bucket rows elided: 5570.560us through 38111.705us, cumulative 0.0214% to 100.0000%]
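Each histogram block in this output prints one row per latency bucket in a fixed format: bucket lower bound (us), upper bound (us), the cumulative percentage of I/Os completed at or below the upper bound, and the raw I/O count that landed in that bucket. Below is a minimal Python sketch of pulling such rows out of the raw console text for post-processing; the regex, the function name, and the sample string (two rows transcribed from the elided bucket data above) are illustrative, not part of SPDK:

    import re

    # One histogram row looks like: "19963.274 - 20064.098: 98.9084% (  10)"
    # fields: lower bound us, upper bound us, cumulative %, per-bucket IO count.
    ROW = re.compile(
        r"(?P<lo>\d+\.\d+)\s*-\s*(?P<hi>\d+\.\d+):\s*"
        r"(?P<cum>\d+\.\d+)%\s*\(\s*(?P<count>\d+)\)"
    )

    def parse_histogram(text):
        """Return (lo_us, hi_us, cumulative_pct, count) tuples found in text."""
        return [
            (float(m["lo"]), float(m["hi"]), float(m["cum"]), int(m["count"]))
            for m in ROW.finditer(text)
        ]

    sample = "19963.274 - 20064.098: 98.9084% (  10) 20064.098 - 20164.923: 98.9797% (  10)"
    print(parse_histogram(sample))
    # [(19963.274, 20064.098, 98.9084, 10), (20064.098, 20164.923, 98.9797, 10)]

Note that the "IO count" column is per bucket while the percentage column is cumulative, which is why the counts are not monotonic even though the percentages are.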
00:07:27.809 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:27.809 ==============================================================================
00:07:27.809        Range in us     Cumulative    IO count
00:07:27.810 [per-bucket rows elided: 5570.560us through 28230.892us, cumulative 0.0213% to 100.0000%]
00:07:27.811 
00:07:27.811 07:39:17 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:07:28.746 Initializing NVMe Controllers
00:07:28.746 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:07:28.746 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:07:28.746 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:07:28.746 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:07:28.746 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:07:28.746 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:07:28.746 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:07:28.746 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:07:28.746 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:07:28.746 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:07:28.746 Initialization complete. Launching workers.
00:07:28.746 ========================================================
00:07:28.746                                                                             Latency(us)
00:07:28.746 Device Information                     :       IOPS      MiB/s    Average        min        max
00:07:28.746 PCIE (0000:00:11.0) NSID 1 from core 0:   16123.42     188.95    7949.02    6158.61   37187.25
00:07:28.746 PCIE (0000:00:13.0) NSID 1 from core 0:   16123.42     188.95    7936.97    6105.96   36144.34
00:07:28.746 PCIE (0000:00:10.0) NSID 1 from core 0:   16123.42     188.95    7923.47    5848.49   34441.23
00:07:28.746 PCIE (0000:00:12.0) NSID 1 from core 0:   16123.42     188.95    7910.28    6087.62   32832.94
00:07:28.746 PCIE (0000:00:12.0) NSID 2 from core 0:   16123.42     188.95    7897.79    6129.58   32453.42
00:07:28.746 PCIE (0000:00:12.0) NSID 3 from core 0:   16187.40     189.70    7854.06    6136.07   24685.85
00:07:28.746 ========================================================
00:07:28.746 Total                                  :   96804.51    1134.43    7911.89    5848.49   37187.25
00:07:28.746 
00:07:28.746 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:28.746 =================================================================================
00:07:28.746   1.00000% :  6503.188us
00:07:28.746  10.00000% :  6805.662us
00:07:28.746  25.00000% :  7007.311us
00:07:28.746  50.00000% :  7309.785us
00:07:28.746  75.00000% :  7813.908us
00:07:28.746  90.00000% :  9225.452us
00:07:28.746  95.00000% : 11846.892us
00:07:28.746  98.00000% : 14821.218us
00:07:28.746  99.00000% : 15526.991us
00:07:28.746  99.50000% : 29440.788us
00:07:28.746  99.90000% : 36901.809us
00:07:28.746  99.99000% : 37305.108us
00:07:28.746  99.99900% : 37305.108us
00:07:28.746  99.99990% : 37305.108us
00:07:28.746  99.99999% : 37305.108us
00:07:28.746 
00:07:28.746 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:28.746 =================================================================================
00:07:28.746   1.00000% :  6503.188us
00:07:28.746  10.00000% :  6755.249us
00:07:28.746  25.00000% :  7007.311us
00:07:28.746  50.00000% :  7309.785us
00:07:28.746  75.00000% :  7813.908us
00:07:28.746  90.00000% :  9275.865us
00:07:28.746  95.00000% : 11746.068us
00:07:28.746  98.00000% : 14720.394us
00:07:28.746  99.00000% : 15829.465us
00:07:28.746  99.50000% : 28230.892us
00:07:28.746  99.90000% : 35893.563us
00:07:28.747  99.99000% : 36296.862us
00:07:28.747  99.99900% : 36296.862us
00:07:28.747  99.99990% : 36296.862us
00:07:28.747  99.99999% : 36296.862us
00:07:28.747 
00:07:28.747 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:28.747 =================================================================================
00:07:28.747   1.00000% :  6377.157us
00:07:28.747  10.00000% :  6704.837us
00:07:28.747  25.00000% :  6956.898us
00:07:28.747  50.00000% :  7309.785us
00:07:28.747  75.00000% :  7864.320us
00:07:28.747  90.00000% :  9326.277us
00:07:28.747  95.00000% : 11746.068us
00:07:28.747  98.00000% : 14619.569us
00:07:28.747  99.00000% : 16131.938us
00:07:28.747  99.50000% : 27222.646us
00:07:28.747  99.90000% : 34078.720us
00:07:28.747  99.99000% : 34482.018us
00:07:28.747  99.99900% : 34482.018us
00:07:28.747  99.99990% : 34482.018us
00:07:28.747  99.99999% : 34482.018us
00:07:28.747 
00:07:28.747 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:28.747 =================================================================================
00:07:28.747   1.00000% :  6452.775us
00:07:28.747  10.00000% :  6755.249us
00:07:28.747  25.00000% :  7007.311us
00:07:28.747  50.00000% :  7309.785us
00:07:28.747  75.00000% :  7813.908us
00:07:28.747  90.00000% :  9326.277us
00:07:28.747  95.00000% : 11796.480us
00:07:28.747  98.00000% : 14619.569us
00:07:28.747  99.00000% : 16434.412us
00:07:28.747  99.50000% : 25710.277us
00:07:28.747  99.90000% : 32465.526us
00:07:28.747  99.99000% : 32868.825us
00:07:28.747  99.99900% : 32868.825us
00:07:28.747  99.99990% : 32868.825us
00:07:28.747  99.99999% : 32868.825us
00:07:28.747 
00:07:28.747 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:28.747 =================================================================================
00:07:28.747   1.00000% :  6553.600us
00:07:28.747  10.00000% :  6805.662us
00:07:28.747  25.00000% :  6956.898us
00:07:28.747  50.00000% :  7309.785us
00:07:28.747  75.00000% :  7813.908us
00:07:28.747  90.00000% :  9326.277us
00:07:28.747  95.00000% : 11746.068us
00:07:28.747  98.00000% : 14821.218us
00:07:28.747  99.00000% : 16031.114us
00:07:28.747  99.50000% : 25004.505us
00:07:28.747  99.90000% : 32062.228us
00:07:28.747  99.99000% : 32465.526us
00:07:28.747  99.99900% : 32465.526us
00:07:28.747  99.99990% : 32465.526us
00:07:28.747  99.99999% : 32465.526us
00:07:28.747 
00:07:28.747 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:28.747 =================================================================================
00:07:28.747   1.00000% :  6553.600us
00:07:28.747  10.00000% :  6805.662us
00:07:28.747  25.00000% :  7007.311us
00:07:28.747  50.00000% :  7309.785us
00:07:28.747  75.00000% :  7864.320us
00:07:28.747  90.00000% :  9275.865us
00:07:28.747  95.00000% : 11746.068us
00:07:28.747  98.00000% : 14720.394us
00:07:28.747  99.00000% : 15123.692us
00:07:28.747  99.50000% : 16535.237us
00:07:28.747  99.90000% : 24298.732us
00:07:28.747  99.99000% : 24702.031us
00:07:28.747  99.99900% : 24702.031us
00:07:28.747  99.99990% : 24702.031us
00:07:28.747  99.99999% : 24702.031us
00:07:28.747 
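The IOPS and MiB/s columns in the device table above are tied together by the I/O size passed on the command line (-o 12288, i.e. 12288 bytes per I/O): MiB/s = IOPS x 12288 / 2^20. Below is a quick consistency check in Python against three of the printed rows; the script is illustrative, not part of the test suite:

    # spdk_nvme_perf was invoked with -o 12288, so each I/O moves 12288 bytes.
    IO_SIZE_BYTES = 12288

    # IOPS figures copied from the device table above.
    rows = {
        "PCIE (0000:00:11.0) NSID 1": 16123.42,
        "PCIE (0000:00:12.0) NSID 3": 16187.40,
        "Total": 96804.51,
    }

    for name, iops in rows.items():
        mib_s = iops * IO_SIZE_BYTES / 2**20  # bytes/s converted to MiB/s
        print(f"{name}: {mib_s:.2f} MiB/s")

    # PCIE (0000:00:11.0) NSID 1: 188.95 MiB/s
    # PCIE (0000:00:12.0) NSID 3: 189.70 MiB/s
    # Total: 1134.43 MiB/s

All three reproduce the MiB/s values printed in the table.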
00:07:28.747 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:28.747 ==============================================================================
00:07:28.747        Range in us     Cumulative    IO count
00:07:28.748 [per-bucket rows elided: 6150.302us through 37305.108us, cumulative 0.0124% to 100.0000%]
00:07:28.748 
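The summary percentiles and the per-bucket histograms describe the same distribution: each reported percentile is consistent with taking the upper bound of the first bucket whose cumulative percentage reaches the target. Below is a Python sketch of that lookup, using five rows transcribed from the elided 0000:00:11.0 per-bucket data; the function name is illustrative. It reproduces the "99.50000% : 29440.788us" line from the corresponding summary block above:

    # (lower bound us, upper bound us, cumulative %) rows transcribed from the
    # PCIE (0000:00:11.0) NSID 1 histogram of the write run.
    buckets = [
        (28835.840, 29037.489, 99.4048),
        (29037.489, 29239.138, 99.4544),
        (29239.138, 29440.788, 99.5040),
        (29440.788, 29642.437, 99.5536),
        (29642.437, 29844.086, 99.6032),
    ]

    def percentile_us(buckets, target_pct):
        """Upper bound (us) of the first bucket whose cumulative % >= target."""
        for lo, hi, cum in buckets:
            if cum >= target_pct:
                return hi
        raise ValueError("target percentile beyond last transcribed bucket")

    print(percentile_us(buckets, 99.5))  # 29440.788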
00:07:28.748 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:28.748 ==============================================================================
00:07:28.748        Range in us     Cumulative    IO count
00:07:28.749 [per-bucket rows elided: 6099.889us through 36296.862us, cumulative 0.0124% to 100.0000%]
00:07:28.749 
00:07:28.749 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:28.749 ==============================================================================
00:07:28.749        Range in us     Cumulative    IO count
00:07:28.749 [per-bucket rows elided: 5847.828us through 7763.495us, cumulative 0.0062% to 72.4020%]
00:07:28.749 7763.495 - 7813.908: 73.8157% ( 228) 00:07:28.749 7813.908 - 7864.320: 75.0620% ( 201) 00:07:28.749 7864.320 - 7914.732: 76.1967% ( 183) 00:07:28.749 7914.732 - 7965.145: 77.3313% ( 183) 00:07:28.749 7965.145 - 8015.557: 78.2614% ( 150) 00:07:28.749 8015.557 - 8065.969: 79.1853% ( 149) 00:07:28.749 8065.969 - 8116.382: 79.9479% ( 123) 00:07:28.749 8116.382 - 8166.794: 80.6424% ( 112) 00:07:28.749 8166.794 - 8217.206: 81.2190% ( 93) 00:07:28.749 8217.206 - 8267.618: 81.7646% ( 88) 00:07:28.749 8267.618 - 8318.031: 82.2359% ( 76) 00:07:28.749 8318.031 - 8368.443: 82.8931% ( 106) 00:07:28.749 8368.443 - 8418.855: 83.3209% ( 69) 00:07:28.749 8418.855 - 8469.268: 83.7302% ( 66) 00:07:28.749 8469.268 - 8519.680: 84.0588% ( 53) 00:07:28.749 8519.680 - 8570.092: 84.4432% ( 62) 00:07:28.749 8570.092 - 8620.505: 84.8772% ( 70) 00:07:28.749 8620.505 - 8670.917: 85.4043% ( 85) 00:07:28.749 8670.917 - 8721.329: 85.8631% ( 74) 00:07:28.749 8721.329 - 8771.742: 86.3777% ( 83) 00:07:28.749 8771.742 - 8822.154: 86.9420% ( 91) 00:07:28.749 8822.154 - 8872.566: 87.3760% ( 70) 00:07:28.749 8872.566 - 8922.978: 87.7666% ( 63) 00:07:28.749 8922.978 - 8973.391: 88.1138% ( 56) 00:07:28.749 8973.391 - 9023.803: 88.4053% ( 47) 00:07:28.749 9023.803 - 9074.215: 88.6099% ( 33) 00:07:28.749 9074.215 - 9124.628: 89.0253% ( 67) 00:07:28.749 9124.628 - 9175.040: 89.3167% ( 47) 00:07:28.749 9175.040 - 9225.452: 89.6143% ( 48) 00:07:28.749 9225.452 - 9275.865: 89.9058% ( 47) 00:07:28.749 9275.865 - 9326.277: 90.2034% ( 48) 00:07:28.749 9326.277 - 9376.689: 90.4638% ( 42) 00:07:28.749 9376.689 - 9427.102: 90.7180% ( 41) 00:07:28.749 9427.102 - 9477.514: 90.8978% ( 29) 00:07:28.749 9477.514 - 9527.926: 91.1768% ( 45) 00:07:28.749 9527.926 - 9578.338: 91.4125% ( 38) 00:07:28.749 9578.338 - 9628.751: 91.5985% ( 30) 00:07:28.749 9628.751 - 9679.163: 91.8031% ( 33) 00:07:28.749 9679.163 - 9729.575: 91.9705% ( 27) 00:07:28.749 9729.575 - 9779.988: 92.2247% ( 41) 00:07:28.749 9779.988 - 9830.400: 92.4479% ( 36) 00:07:28.749 9830.400 - 9880.812: 92.6277% ( 29) 00:07:28.749 9880.812 - 9931.225: 92.8075% ( 29) 00:07:28.749 9931.225 - 9981.637: 92.9936% ( 30) 00:07:28.749 9981.637 - 10032.049: 93.0990% ( 17) 00:07:28.749 10032.049 - 10082.462: 93.1486% ( 8) 00:07:28.749 10082.462 - 10132.874: 93.2230% ( 12) 00:07:28.749 10132.874 - 10183.286: 93.2664% ( 7) 00:07:28.749 10183.286 - 10233.698: 93.3098% ( 7) 00:07:28.749 10233.698 - 10284.111: 93.3780% ( 11) 00:07:28.749 10284.111 - 10334.523: 93.4214% ( 7) 00:07:28.749 10334.523 - 10384.935: 93.4896% ( 11) 00:07:28.749 10384.935 - 10435.348: 93.5516% ( 10) 00:07:28.749 10435.348 - 10485.760: 93.5888% ( 6) 00:07:28.749 10485.760 - 10536.172: 93.6570% ( 11) 00:07:28.749 10536.172 - 10586.585: 93.7190% ( 10) 00:07:28.749 10586.585 - 10636.997: 93.7810% ( 10) 00:07:28.749 10636.997 - 10687.409: 93.8120% ( 5) 00:07:28.749 10687.409 - 10737.822: 93.8554% ( 7) 00:07:28.749 10737.822 - 10788.234: 93.8802% ( 4) 00:07:28.749 10788.234 - 10838.646: 93.9174% ( 6) 00:07:28.749 10838.646 - 10889.058: 93.9918% ( 12) 00:07:28.749 10889.058 - 10939.471: 94.0228% ( 5) 00:07:28.749 10939.471 - 10989.883: 94.0538% ( 5) 00:07:28.749 10989.883 - 11040.295: 94.0848% ( 5) 00:07:28.749 11040.295 - 11090.708: 94.1344% ( 8) 00:07:28.749 11090.708 - 11141.120: 94.1778% ( 7) 00:07:28.749 11141.120 - 11191.532: 94.2212% ( 7) 00:07:28.749 11191.532 - 11241.945: 94.2646% ( 7) 00:07:28.749 11241.945 - 11292.357: 94.3638% ( 16) 00:07:28.749 11292.357 - 11342.769: 94.4196% ( 9) 00:07:28.749 11342.769 - 
11393.182: 94.4692% ( 8) 00:07:28.749 11393.182 - 11443.594: 94.5312% ( 10) 00:07:28.750 11443.594 - 11494.006: 94.6119% ( 13) 00:07:28.750 11494.006 - 11544.418: 94.7111% ( 16) 00:07:28.750 11544.418 - 11594.831: 94.7669% ( 9) 00:07:28.750 11594.831 - 11645.243: 94.8599% ( 15) 00:07:28.750 11645.243 - 11695.655: 94.9281% ( 11) 00:07:28.750 11695.655 - 11746.068: 95.0087% ( 13) 00:07:28.750 11746.068 - 11796.480: 95.0893% ( 13) 00:07:28.750 11796.480 - 11846.892: 95.1761% ( 14) 00:07:28.750 11846.892 - 11897.305: 95.3125% ( 22) 00:07:28.750 11897.305 - 11947.717: 95.3683% ( 9) 00:07:28.750 11947.717 - 11998.129: 95.4489% ( 13) 00:07:28.750 11998.129 - 12048.542: 95.4923% ( 7) 00:07:28.750 12048.542 - 12098.954: 95.5419% ( 8) 00:07:28.750 12098.954 - 12149.366: 95.5853% ( 7) 00:07:28.750 12149.366 - 12199.778: 95.6473% ( 10) 00:07:28.750 12199.778 - 12250.191: 95.6969% ( 8) 00:07:28.750 12250.191 - 12300.603: 95.7589% ( 10) 00:07:28.750 12300.603 - 12351.015: 95.8519% ( 15) 00:07:28.750 12351.015 - 12401.428: 95.9449% ( 15) 00:07:28.750 12401.428 - 12451.840: 96.0255% ( 13) 00:07:28.750 12451.840 - 12502.252: 96.0503% ( 4) 00:07:28.750 12502.252 - 12552.665: 96.0689% ( 3) 00:07:28.750 12552.665 - 12603.077: 96.0813% ( 2) 00:07:28.750 12603.077 - 12653.489: 96.1186% ( 6) 00:07:28.750 12653.489 - 12703.902: 96.1620% ( 7) 00:07:28.750 12703.902 - 12754.314: 96.2116% ( 8) 00:07:28.750 12754.314 - 12804.726: 96.3046% ( 15) 00:07:28.750 12804.726 - 12855.138: 96.3418% ( 6) 00:07:28.750 12855.138 - 12905.551: 96.3790% ( 6) 00:07:28.750 12905.551 - 13006.375: 96.4844% ( 17) 00:07:28.750 13006.375 - 13107.200: 96.5898% ( 17) 00:07:28.750 13107.200 - 13208.025: 96.8006% ( 34) 00:07:28.750 13208.025 - 13308.849: 96.8874% ( 14) 00:07:28.750 13308.849 - 13409.674: 96.9184% ( 5) 00:07:28.750 13409.674 - 13510.498: 96.9866% ( 11) 00:07:28.750 13510.498 - 13611.323: 97.0672% ( 13) 00:07:28.750 13611.323 - 13712.148: 97.0920% ( 4) 00:07:28.750 13712.148 - 13812.972: 97.2408% ( 24) 00:07:28.750 13812.972 - 13913.797: 97.3710% ( 21) 00:07:28.750 13913.797 - 14014.622: 97.5322% ( 26) 00:07:28.750 14014.622 - 14115.446: 97.6625% ( 21) 00:07:28.750 14115.446 - 14216.271: 97.7245% ( 10) 00:07:28.750 14216.271 - 14317.095: 97.7741% ( 8) 00:07:28.750 14317.095 - 14417.920: 97.8423% ( 11) 00:07:28.750 14417.920 - 14518.745: 97.9043% ( 10) 00:07:28.750 14518.745 - 14619.569: 98.0221% ( 19) 00:07:28.750 14619.569 - 14720.394: 98.1275% ( 17) 00:07:28.750 14720.394 - 14821.218: 98.2081% ( 13) 00:07:28.750 14821.218 - 14922.043: 98.3011% ( 15) 00:07:28.750 14922.043 - 15022.868: 98.3693% ( 11) 00:07:28.750 15022.868 - 15123.692: 98.4995% ( 21) 00:07:28.750 15123.692 - 15224.517: 98.5677% ( 11) 00:07:28.750 15224.517 - 15325.342: 98.5801% ( 2) 00:07:28.750 15325.342 - 15426.166: 98.6483% ( 11) 00:07:28.750 15426.166 - 15526.991: 98.7413% ( 15) 00:07:28.750 15526.991 - 15627.815: 98.8157% ( 12) 00:07:28.750 15627.815 - 15728.640: 98.8715% ( 9) 00:07:28.750 15728.640 - 15829.465: 98.9335% ( 10) 00:07:28.750 15829.465 - 15930.289: 98.9707% ( 6) 00:07:28.750 15930.289 - 16031.114: 98.9955% ( 4) 00:07:28.750 16031.114 - 16131.938: 99.0513% ( 9) 00:07:28.750 16232.763 - 16333.588: 99.0885% ( 6) 00:07:28.750 16333.588 - 16434.412: 99.1319% ( 7) 00:07:28.750 16434.412 - 16535.237: 99.1567% ( 4) 00:07:28.750 16535.237 - 16636.062: 99.1815% ( 4) 00:07:28.750 16636.062 - 16736.886: 99.2063% ( 4) 00:07:28.750 25710.277 - 25811.102: 99.2188% ( 2) 00:07:28.750 25811.102 - 26012.751: 99.2560% ( 6) 00:07:28.750 26012.751 - 26214.400: 
99.3180% ( 10) 00:07:28.750 26214.400 - 26416.049: 99.3614% ( 7) 00:07:28.750 26416.049 - 26617.698: 99.3986% ( 6) 00:07:28.750 26617.698 - 26819.348: 99.4482% ( 8) 00:07:28.750 26819.348 - 27020.997: 99.4978% ( 8) 00:07:28.750 27020.997 - 27222.646: 99.5350% ( 6) 00:07:28.750 27222.646 - 27424.295: 99.5722% ( 6) 00:07:28.750 27424.295 - 27625.945: 99.6032% ( 5) 00:07:28.750 32465.526 - 32667.175: 99.6156% ( 2) 00:07:28.750 32667.175 - 32868.825: 99.6404% ( 4) 00:07:28.750 32868.825 - 33070.474: 99.6838% ( 7) 00:07:28.750 33070.474 - 33272.123: 99.7272% ( 7) 00:07:28.750 33272.123 - 33473.772: 99.7768% ( 8) 00:07:28.750 33473.772 - 33675.422: 99.8140% ( 6) 00:07:28.750 33675.422 - 33877.071: 99.8636% ( 8) 00:07:28.750 33877.071 - 34078.720: 99.9008% ( 6) 00:07:28.750 34078.720 - 34280.369: 99.9628% ( 10) 00:07:28.750 34280.369 - 34482.018: 100.0000% ( 6) 00:07:28.750 00:07:28.750 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:28.750 ============================================================================== 00:07:28.750 Range in us Cumulative IO count 00:07:28.750 6074.683 - 6099.889: 0.0062% ( 1) 00:07:28.750 6175.508 - 6200.714: 0.0124% ( 1) 00:07:28.750 6200.714 - 6225.920: 0.0434% ( 5) 00:07:28.750 6225.920 - 6251.126: 0.0806% ( 6) 00:07:28.750 6251.126 - 6276.332: 0.1302% ( 8) 00:07:28.750 6276.332 - 6301.538: 0.1736% ( 7) 00:07:28.750 6301.538 - 6326.745: 0.2542% ( 13) 00:07:28.750 6326.745 - 6351.951: 0.3782% ( 20) 00:07:28.750 6351.951 - 6377.157: 0.5642% ( 30) 00:07:28.750 6377.157 - 6402.363: 0.7254% ( 26) 00:07:28.750 6402.363 - 6427.569: 0.8867% ( 26) 00:07:28.750 6427.569 - 6452.775: 1.1595% ( 44) 00:07:28.750 6452.775 - 6503.188: 1.7981% ( 103) 00:07:28.750 6503.188 - 6553.600: 2.6910% ( 144) 00:07:28.750 6553.600 - 6604.012: 3.9683% ( 206) 00:07:28.750 6604.012 - 6654.425: 5.5060% ( 248) 00:07:28.750 6654.425 - 6704.837: 7.7071% ( 355) 00:07:28.750 6704.837 - 6755.249: 10.1252% ( 390) 00:07:28.750 6755.249 - 6805.662: 13.2068% ( 497) 00:07:28.750 6805.662 - 6856.074: 16.4621% ( 525) 00:07:28.750 6856.074 - 6906.486: 20.0273% ( 575) 00:07:28.750 6906.486 - 6956.898: 23.5057% ( 561) 00:07:28.750 6956.898 - 7007.311: 27.6538% ( 669) 00:07:28.750 7007.311 - 7057.723: 32.1615% ( 727) 00:07:28.750 7057.723 - 7108.135: 37.0040% ( 781) 00:07:28.750 7108.135 - 7158.548: 41.0032% ( 645) 00:07:28.750 7158.548 - 7208.960: 44.6367% ( 586) 00:07:28.750 7208.960 - 7259.372: 48.6173% ( 642) 00:07:28.750 7259.372 - 7309.785: 52.9266% ( 695) 00:07:28.750 7309.785 - 7360.197: 56.2810% ( 541) 00:07:28.750 7360.197 - 7410.609: 59.3626% ( 497) 00:07:28.750 7410.609 - 7461.022: 62.1776% ( 454) 00:07:28.750 7461.022 - 7511.434: 64.7631% ( 417) 00:07:28.750 7511.434 - 7561.846: 67.4913% ( 440) 00:07:28.750 7561.846 - 7612.258: 69.9777% ( 401) 00:07:28.750 7612.258 - 7662.671: 71.7138% ( 280) 00:07:28.750 7662.671 - 7713.083: 73.3073% ( 257) 00:07:28.750 7713.083 - 7763.495: 74.4606% ( 186) 00:07:28.750 7763.495 - 7813.908: 75.5518% ( 176) 00:07:28.750 7813.908 - 7864.320: 76.4137% ( 139) 00:07:28.750 7864.320 - 7914.732: 77.0833% ( 108) 00:07:28.750 7914.732 - 7965.145: 77.9266% ( 136) 00:07:28.750 7965.145 - 8015.557: 78.5776% ( 105) 00:07:28.750 8015.557 - 8065.969: 79.2163% ( 103) 00:07:28.750 8065.969 - 8116.382: 79.9417% ( 117) 00:07:28.750 8116.382 - 8166.794: 80.8346% ( 144) 00:07:28.750 8166.794 - 8217.206: 81.5662% ( 118) 00:07:28.750 8217.206 - 8267.618: 82.2793% ( 115) 00:07:28.750 8267.618 - 8318.031: 82.7877% ( 82) 00:07:28.750 8318.031 - 8368.443: 83.4449% ( 
106) 00:07:28.750 8368.443 - 8418.855: 83.9038% ( 74) 00:07:28.750 8418.855 - 8469.268: 84.2696% ( 59) 00:07:28.750 8469.268 - 8519.680: 84.6788% ( 66) 00:07:28.750 8519.680 - 8570.092: 85.0508% ( 60) 00:07:28.750 8570.092 - 8620.505: 85.4105% ( 58) 00:07:28.750 8620.505 - 8670.917: 85.6957% ( 46) 00:07:28.750 8670.917 - 8721.329: 85.9561% ( 42) 00:07:28.750 8721.329 - 8771.742: 86.3281% ( 60) 00:07:28.750 8771.742 - 8822.154: 86.6567% ( 53) 00:07:28.750 8822.154 - 8872.566: 87.0102% ( 57) 00:07:28.750 8872.566 - 8922.978: 87.3760% ( 59) 00:07:28.750 8922.978 - 8973.391: 87.7542% ( 61) 00:07:28.750 8973.391 - 9023.803: 88.1262% ( 60) 00:07:28.750 9023.803 - 9074.215: 88.5169% ( 63) 00:07:28.750 9074.215 - 9124.628: 88.8145% ( 48) 00:07:28.750 9124.628 - 9175.040: 89.2175% ( 65) 00:07:28.750 9175.040 - 9225.452: 89.6081% ( 63) 00:07:28.750 9225.452 - 9275.865: 89.8686% ( 42) 00:07:28.750 9275.865 - 9326.277: 90.1662% ( 48) 00:07:28.750 9326.277 - 9376.689: 90.4018% ( 38) 00:07:28.750 9376.689 - 9427.102: 90.6374% ( 38) 00:07:28.750 9427.102 - 9477.514: 90.8668% ( 37) 00:07:28.750 9477.514 - 9527.926: 91.0962% ( 37) 00:07:28.750 9527.926 - 9578.338: 91.3256% ( 37) 00:07:28.750 9578.338 - 9628.751: 91.7039% ( 61) 00:07:28.750 9628.751 - 9679.163: 91.8527% ( 24) 00:07:28.750 9679.163 - 9729.575: 91.9953% ( 23) 00:07:28.750 9729.575 - 9779.988: 92.1441% ( 24) 00:07:28.750 9779.988 - 9830.400: 92.3115% ( 27) 00:07:28.750 9830.400 - 9880.812: 92.4417% ( 21) 00:07:28.750 9880.812 - 9931.225: 92.5781% ( 22) 00:07:28.750 9931.225 - 9981.637: 92.7455% ( 27) 00:07:28.750 9981.637 - 10032.049: 92.9067% ( 26) 00:07:28.750 10032.049 - 10082.462: 93.1238% ( 35) 00:07:28.750 10082.462 - 10132.874: 93.2292% ( 17) 00:07:28.750 10132.874 - 10183.286: 93.3470% ( 19) 00:07:28.750 10183.286 - 10233.698: 93.4524% ( 17) 00:07:28.750 10233.698 - 10284.111: 93.6260% ( 28) 00:07:28.750 10284.111 - 10334.523: 93.7438% ( 19) 00:07:28.750 10334.523 - 10384.935: 93.7810% ( 6) 00:07:28.750 10384.935 - 10435.348: 93.8244% ( 7) 00:07:28.750 10435.348 - 10485.760: 93.8616% ( 6) 00:07:28.750 10485.760 - 10536.172: 93.8802% ( 3) 00:07:28.750 10536.172 - 10586.585: 93.9236% ( 7) 00:07:28.750 10586.585 - 10636.997: 93.9732% ( 8) 00:07:28.750 10636.997 - 10687.409: 94.0042% ( 5) 00:07:28.750 10687.409 - 10737.822: 94.0228% ( 3) 00:07:28.750 10737.822 - 10788.234: 94.0538% ( 5) 00:07:28.750 10788.234 - 10838.646: 94.0786% ( 4) 00:07:28.750 10838.646 - 10889.058: 94.1158% ( 6) 00:07:28.750 10889.058 - 10939.471: 94.1468% ( 5) 00:07:28.751 10939.471 - 10989.883: 94.1840% ( 6) 00:07:28.751 10989.883 - 11040.295: 94.2150% ( 5) 00:07:28.751 11040.295 - 11090.708: 94.2398% ( 4) 00:07:28.751 11090.708 - 11141.120: 94.2770% ( 6) 00:07:28.751 11141.120 - 11191.532: 94.3080% ( 5) 00:07:28.751 11191.532 - 11241.945: 94.3328% ( 4) 00:07:28.751 11241.945 - 11292.357: 94.3576% ( 4) 00:07:28.751 11292.357 - 11342.769: 94.3762% ( 3) 00:07:28.751 11342.769 - 11393.182: 94.4506% ( 12) 00:07:28.751 11393.182 - 11443.594: 94.4940% ( 7) 00:07:28.751 11443.594 - 11494.006: 94.5685% ( 12) 00:07:28.751 11494.006 - 11544.418: 94.6429% ( 12) 00:07:28.751 11544.418 - 11594.831: 94.7173% ( 12) 00:07:28.751 11594.831 - 11645.243: 94.8537% ( 22) 00:07:28.751 11645.243 - 11695.655: 94.9219% ( 11) 00:07:28.751 11695.655 - 11746.068: 94.9963% ( 12) 00:07:28.751 11746.068 - 11796.480: 95.0707% ( 12) 00:07:28.751 11796.480 - 11846.892: 95.1327% ( 10) 00:07:28.751 11846.892 - 11897.305: 95.1947% ( 10) 00:07:28.751 11897.305 - 11947.717: 95.2505% ( 9) 00:07:28.751 
11947.717 - 11998.129: 95.3249% ( 12) 00:07:28.751 11998.129 - 12048.542: 95.3931% ( 11) 00:07:28.751 12048.542 - 12098.954: 95.4489% ( 9) 00:07:28.751 12098.954 - 12149.366: 95.4985% ( 8) 00:07:28.751 12149.366 - 12199.778: 95.5853% ( 14) 00:07:28.751 12199.778 - 12250.191: 95.6535% ( 11) 00:07:28.751 12250.191 - 12300.603: 95.6969% ( 7) 00:07:28.751 12300.603 - 12351.015: 95.7341% ( 6) 00:07:28.751 12351.015 - 12401.428: 95.7651% ( 5) 00:07:28.751 12401.428 - 12451.840: 95.8271% ( 10) 00:07:28.751 12451.840 - 12502.252: 95.8891% ( 10) 00:07:28.751 12502.252 - 12552.665: 95.9759% ( 14) 00:07:28.751 12552.665 - 12603.077: 96.1186% ( 23) 00:07:28.751 12603.077 - 12653.489: 96.2302% ( 18) 00:07:28.751 12653.489 - 12703.902: 96.3232% ( 15) 00:07:28.751 12703.902 - 12754.314: 96.4038% ( 13) 00:07:28.751 12754.314 - 12804.726: 96.4596% ( 9) 00:07:28.751 12804.726 - 12855.138: 96.5216% ( 10) 00:07:28.751 12855.138 - 12905.551: 96.5774% ( 9) 00:07:28.751 12905.551 - 13006.375: 96.6084% ( 5) 00:07:28.751 13006.375 - 13107.200: 96.6456% ( 6) 00:07:28.751 13107.200 - 13208.025: 96.6952% ( 8) 00:07:28.751 13208.025 - 13308.849: 96.8130% ( 19) 00:07:28.751 13308.849 - 13409.674: 96.9494% ( 22) 00:07:28.751 13409.674 - 13510.498: 97.0796% ( 21) 00:07:28.751 13510.498 - 13611.323: 97.2532% ( 28) 00:07:28.751 13611.323 - 13712.148: 97.4144% ( 26) 00:07:28.751 13712.148 - 13812.972: 97.5694% ( 25) 00:07:28.751 13812.972 - 13913.797: 97.6749% ( 17) 00:07:28.751 13913.797 - 14014.622: 97.7741% ( 16) 00:07:28.751 14014.622 - 14115.446: 97.8237% ( 8) 00:07:28.751 14115.446 - 14216.271: 97.8609% ( 6) 00:07:28.751 14216.271 - 14317.095: 97.8919% ( 5) 00:07:28.751 14317.095 - 14417.920: 97.9291% ( 6) 00:07:28.751 14417.920 - 14518.745: 97.9601% ( 5) 00:07:28.751 14518.745 - 14619.569: 98.0097% ( 8) 00:07:28.751 14619.569 - 14720.394: 98.0655% ( 9) 00:07:28.751 14720.394 - 14821.218: 98.1523% ( 14) 00:07:28.751 14821.218 - 14922.043: 98.3011% ( 24) 00:07:28.751 14922.043 - 15022.868: 98.4499% ( 24) 00:07:28.751 15022.868 - 15123.692: 98.5491% ( 16) 00:07:28.751 15123.692 - 15224.517: 98.5925% ( 7) 00:07:28.751 15224.517 - 15325.342: 98.6297% ( 6) 00:07:28.751 15325.342 - 15426.166: 98.6545% ( 4) 00:07:28.751 15426.166 - 15526.991: 98.6855% ( 5) 00:07:28.751 15526.991 - 15627.815: 98.7165% ( 5) 00:07:28.751 15627.815 - 15728.640: 98.7475% ( 5) 00:07:28.751 15728.640 - 15829.465: 98.7785% ( 5) 00:07:28.751 15829.465 - 15930.289: 98.8467% ( 11) 00:07:28.751 15930.289 - 16031.114: 98.8777% ( 5) 00:07:28.751 16031.114 - 16131.938: 98.9149% ( 6) 00:07:28.751 16131.938 - 16232.763: 98.9521% ( 6) 00:07:28.751 16232.763 - 16333.588: 98.9831% ( 5) 00:07:28.751 16333.588 - 16434.412: 99.0141% ( 5) 00:07:28.751 16434.412 - 16535.237: 99.0513% ( 6) 00:07:28.751 16535.237 - 16636.062: 99.0885% ( 6) 00:07:28.751 16636.062 - 16736.886: 99.1257% ( 6) 00:07:28.751 16736.886 - 16837.711: 99.1629% ( 6) 00:07:28.751 16837.711 - 16938.535: 99.2001% ( 6) 00:07:28.751 16938.535 - 17039.360: 99.2063% ( 1) 00:07:28.751 24399.557 - 24500.382: 99.2188% ( 2) 00:07:28.751 24500.382 - 24601.206: 99.2436% ( 4) 00:07:28.751 24601.206 - 24702.031: 99.2684% ( 4) 00:07:28.751 24702.031 - 24802.855: 99.2870% ( 3) 00:07:28.751 24802.855 - 24903.680: 99.3180% ( 5) 00:07:28.751 24903.680 - 25004.505: 99.3428% ( 4) 00:07:28.751 25004.505 - 25105.329: 99.3676% ( 4) 00:07:28.751 25105.329 - 25206.154: 99.3862% ( 3) 00:07:28.751 25206.154 - 25306.978: 99.4172% ( 5) 00:07:28.751 25306.978 - 25407.803: 99.4420% ( 4) 00:07:28.751 25407.803 - 25508.628: 
99.4668% ( 4) 00:07:28.751 25508.628 - 25609.452: 99.4916% ( 4) 00:07:28.751 25609.452 - 25710.277: 99.5164% ( 4) 00:07:28.751 25710.277 - 25811.102: 99.5412% ( 4) 00:07:28.751 25811.102 - 26012.751: 99.5970% ( 9) 00:07:28.751 26012.751 - 26214.400: 99.6032% ( 1) 00:07:28.751 31255.631 - 31457.280: 99.6528% ( 8) 00:07:28.751 31457.280 - 31658.929: 99.7024% ( 8) 00:07:28.751 31658.929 - 31860.578: 99.7520% ( 8) 00:07:28.751 31860.578 - 32062.228: 99.8016% ( 8) 00:07:28.751 32062.228 - 32263.877: 99.8512% ( 8) 00:07:28.751 32263.877 - 32465.526: 99.9008% ( 8) 00:07:28.751 32465.526 - 32667.175: 99.9566% ( 9) 00:07:28.751 32667.175 - 32868.825: 100.0000% ( 7) 00:07:28.751 00:07:28.751 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:28.751 ============================================================================== 00:07:28.751 Range in us Cumulative IO count 00:07:28.751 6125.095 - 6150.302: 0.0062% ( 1) 00:07:28.751 6150.302 - 6175.508: 0.0124% ( 1) 00:07:28.751 6175.508 - 6200.714: 0.0248% ( 2) 00:07:28.751 6200.714 - 6225.920: 0.0372% ( 2) 00:07:28.751 6251.126 - 6276.332: 0.0496% ( 2) 00:07:28.751 6301.538 - 6326.745: 0.0620% ( 2) 00:07:28.751 6326.745 - 6351.951: 0.0806% ( 3) 00:07:28.751 6351.951 - 6377.157: 0.1364% ( 9) 00:07:28.751 6377.157 - 6402.363: 0.2232% ( 14) 00:07:28.751 6402.363 - 6427.569: 0.3410% ( 19) 00:07:28.751 6427.569 - 6452.775: 0.4774% ( 22) 00:07:28.751 6452.775 - 6503.188: 0.8929% ( 67) 00:07:28.751 6503.188 - 6553.600: 1.5687% ( 109) 00:07:28.751 6553.600 - 6604.012: 2.8026% ( 199) 00:07:28.751 6604.012 - 6654.425: 4.2473% ( 233) 00:07:28.751 6654.425 - 6704.837: 6.6716% ( 391) 00:07:28.751 6704.837 - 6755.249: 9.7408% ( 495) 00:07:28.751 6755.249 - 6805.662: 13.4239% ( 594) 00:07:28.751 6805.662 - 6856.074: 17.5347% ( 663) 00:07:28.751 6856.074 - 6906.486: 21.6084% ( 657) 00:07:28.751 6906.486 - 6956.898: 25.1922% ( 578) 00:07:28.751 6956.898 - 7007.311: 28.8194% ( 585) 00:07:28.751 7007.311 - 7057.723: 33.1411% ( 697) 00:07:28.751 7057.723 - 7108.135: 37.1776% ( 651) 00:07:28.751 7108.135 - 7158.548: 41.0838% ( 630) 00:07:28.751 7158.548 - 7208.960: 44.8165% ( 602) 00:07:28.751 7208.960 - 7259.372: 48.9645% ( 669) 00:07:28.751 7259.372 - 7309.785: 52.8088% ( 620) 00:07:28.751 7309.785 - 7360.197: 56.3244% ( 567) 00:07:28.751 7360.197 - 7410.609: 59.4122% ( 498) 00:07:28.751 7410.609 - 7461.022: 62.6736% ( 526) 00:07:28.751 7461.022 - 7511.434: 65.4390% ( 446) 00:07:28.751 7511.434 - 7561.846: 67.8571% ( 390) 00:07:28.751 7561.846 - 7612.258: 69.6987% ( 297) 00:07:28.751 7612.258 - 7662.671: 71.4534% ( 283) 00:07:28.751 7662.671 - 7713.083: 72.7679% ( 212) 00:07:28.751 7713.083 - 7763.495: 74.0265% ( 203) 00:07:28.751 7763.495 - 7813.908: 75.5580% ( 247) 00:07:28.751 7813.908 - 7864.320: 76.6865% ( 182) 00:07:28.751 7864.320 - 7914.732: 77.5608% ( 141) 00:07:28.751 7914.732 - 7965.145: 78.4226% ( 139) 00:07:28.751 7965.145 - 8015.557: 79.0365% ( 99) 00:07:28.751 8015.557 - 8065.969: 79.7929% ( 122) 00:07:28.751 8065.969 - 8116.382: 80.4129% ( 100) 00:07:28.751 8116.382 - 8166.794: 80.9090% ( 80) 00:07:28.751 8166.794 - 8217.206: 81.5166% ( 98) 00:07:28.751 8217.206 - 8267.618: 82.2297% ( 115) 00:07:28.751 8267.618 - 8318.031: 82.7753% ( 88) 00:07:28.751 8318.031 - 8368.443: 83.3767% ( 97) 00:07:28.751 8368.443 - 8418.855: 84.0092% ( 102) 00:07:28.751 8418.855 - 8469.268: 84.6540% ( 104) 00:07:28.751 8469.268 - 8519.680: 85.0260% ( 60) 00:07:28.752 8519.680 - 8570.092: 85.3485% ( 52) 00:07:28.752 8570.092 - 8620.505: 85.6771% ( 53) 
00:07:28.752 8620.505 - 8670.917: 86.0801% ( 65) 00:07:28.752 8670.917 - 8721.329: 86.5079% ( 69) 00:07:28.752 8721.329 - 8771.742: 86.8924% ( 62) 00:07:28.752 8771.742 - 8822.154: 87.2334% ( 55) 00:07:28.752 8822.154 - 8872.566: 87.5434% ( 50) 00:07:28.752 8872.566 - 8922.978: 87.8410% ( 48) 00:07:28.752 8922.978 - 8973.391: 88.2378% ( 64) 00:07:28.752 8973.391 - 9023.803: 88.5975% ( 58) 00:07:28.752 9023.803 - 9074.215: 88.8703% ( 44) 00:07:28.752 9074.215 - 9124.628: 89.1121% ( 39) 00:07:28.752 9124.628 - 9175.040: 89.3291% ( 35) 00:07:28.752 9175.040 - 9225.452: 89.6143% ( 46) 00:07:28.752 9225.452 - 9275.865: 89.8996% ( 46) 00:07:28.752 9275.865 - 9326.277: 90.2158% ( 51) 00:07:28.752 9326.277 - 9376.689: 90.4948% ( 45) 00:07:28.752 9376.689 - 9427.102: 90.7614% ( 43) 00:07:28.752 9427.102 - 9477.514: 90.8792% ( 19) 00:07:28.752 9477.514 - 9527.926: 91.0218% ( 23) 00:07:28.752 9527.926 - 9578.338: 91.1954% ( 28) 00:07:28.752 9578.338 - 9628.751: 91.4497% ( 41) 00:07:28.752 9628.751 - 9679.163: 91.7101% ( 42) 00:07:28.752 9679.163 - 9729.575: 91.8403% ( 21) 00:07:28.752 9729.575 - 9779.988: 91.9643% ( 20) 00:07:28.752 9779.988 - 9830.400: 92.0945% ( 21) 00:07:28.752 9830.400 - 9880.812: 92.1999% ( 17) 00:07:28.752 9880.812 - 9931.225: 92.3053% ( 17) 00:07:28.752 9931.225 - 9981.637: 92.4045% ( 16) 00:07:28.752 9981.637 - 10032.049: 92.5347% ( 21) 00:07:28.752 10032.049 - 10082.462: 92.7021% ( 27) 00:07:28.752 10082.462 - 10132.874: 92.8757% ( 28) 00:07:28.752 10132.874 - 10183.286: 93.0556% ( 29) 00:07:28.752 10183.286 - 10233.698: 93.2044% ( 24) 00:07:28.752 10233.698 - 10284.111: 93.4152% ( 34) 00:07:28.752 10284.111 - 10334.523: 93.6818% ( 43) 00:07:28.752 10334.523 - 10384.935: 93.8554% ( 28) 00:07:28.752 10384.935 - 10435.348: 93.9298% ( 12) 00:07:28.752 10435.348 - 10485.760: 93.9980% ( 11) 00:07:28.752 10485.760 - 10536.172: 94.0538% ( 9) 00:07:28.752 10536.172 - 10586.585: 94.1034% ( 8) 00:07:28.752 10586.585 - 10636.997: 94.1654% ( 10) 00:07:28.752 10636.997 - 10687.409: 94.2274% ( 10) 00:07:28.752 10687.409 - 10737.822: 94.2708% ( 7) 00:07:28.752 10737.822 - 10788.234: 94.3018% ( 5) 00:07:28.752 10788.234 - 10838.646: 94.3328% ( 5) 00:07:28.752 10838.646 - 10889.058: 94.3576% ( 4) 00:07:28.752 10889.058 - 10939.471: 94.3886% ( 5) 00:07:28.752 10939.471 - 10989.883: 94.4072% ( 3) 00:07:28.752 10989.883 - 11040.295: 94.4196% ( 2) 00:07:28.752 11040.295 - 11090.708: 94.4320% ( 2) 00:07:28.752 11090.708 - 11141.120: 94.4444% ( 2) 00:07:28.752 11141.120 - 11191.532: 94.4692% ( 4) 00:07:28.752 11191.532 - 11241.945: 94.4754% ( 1) 00:07:28.752 11241.945 - 11292.357: 94.5002% ( 4) 00:07:28.752 11292.357 - 11342.769: 94.5126% ( 2) 00:07:28.752 11342.769 - 11393.182: 94.5250% ( 2) 00:07:28.752 11393.182 - 11443.594: 94.5499% ( 4) 00:07:28.752 11443.594 - 11494.006: 94.5933% ( 7) 00:07:28.752 11494.006 - 11544.418: 94.6367% ( 7) 00:07:28.752 11544.418 - 11594.831: 94.7049% ( 11) 00:07:28.752 11594.831 - 11645.243: 94.7917% ( 14) 00:07:28.752 11645.243 - 11695.655: 94.9529% ( 26) 00:07:28.752 11695.655 - 11746.068: 95.0769% ( 20) 00:07:28.752 11746.068 - 11796.480: 95.2071% ( 21) 00:07:28.752 11796.480 - 11846.892: 95.3373% ( 21) 00:07:28.752 11846.892 - 11897.305: 95.4427% ( 17) 00:07:28.752 11897.305 - 11947.717: 95.5233% ( 13) 00:07:28.752 11947.717 - 11998.129: 95.6287% ( 17) 00:07:28.752 11998.129 - 12048.542: 95.8085% ( 29) 00:07:28.752 12048.542 - 12098.954: 95.8953% ( 14) 00:07:28.752 12098.954 - 12149.366: 95.9883% ( 15) 00:07:28.752 12149.366 - 12199.778: 96.1992% ( 34) 
00:07:28.752 12199.778 - 12250.191: 96.2612% ( 10) 00:07:28.752 12250.191 - 12300.603: 96.3046% ( 7) 00:07:28.752 12300.603 - 12351.015: 96.3542% ( 8) 00:07:28.752 12351.015 - 12401.428: 96.3852% ( 5) 00:07:28.752 12401.428 - 12451.840: 96.4348% ( 8) 00:07:28.752 12451.840 - 12502.252: 96.4720% ( 6) 00:07:28.752 12502.252 - 12552.665: 96.4968% ( 4) 00:07:28.752 12552.665 - 12603.077: 96.5278% ( 5) 00:07:28.752 12603.077 - 12653.489: 96.5898% ( 10) 00:07:28.752 12653.489 - 12703.902: 96.6704% ( 13) 00:07:28.752 12703.902 - 12754.314: 96.7200% ( 8) 00:07:28.752 12754.314 - 12804.726: 96.7386% ( 3) 00:07:28.752 12804.726 - 12855.138: 96.7448% ( 1) 00:07:28.752 12855.138 - 12905.551: 96.7634% ( 3) 00:07:28.752 12905.551 - 13006.375: 96.7882% ( 4) 00:07:28.752 13006.375 - 13107.200: 96.8130% ( 4) 00:07:28.752 13107.200 - 13208.025: 96.8254% ( 2) 00:07:28.752 13208.025 - 13308.849: 96.8502% ( 4) 00:07:28.752 13308.849 - 13409.674: 96.9122% ( 10) 00:07:28.752 13409.674 - 13510.498: 96.9928% ( 13) 00:07:28.752 13510.498 - 13611.323: 97.1044% ( 18) 00:07:28.752 13611.323 - 13712.148: 97.2346% ( 21) 00:07:28.752 13712.148 - 13812.972: 97.3958% ( 26) 00:07:28.752 13812.972 - 13913.797: 97.5260% ( 21) 00:07:28.752 13913.797 - 14014.622: 97.6314% ( 17) 00:07:28.752 14014.622 - 14115.446: 97.6873% ( 9) 00:07:28.752 14115.446 - 14216.271: 97.7369% ( 8) 00:07:28.752 14216.271 - 14317.095: 97.7679% ( 5) 00:07:28.752 14317.095 - 14417.920: 97.7927% ( 4) 00:07:28.752 14417.920 - 14518.745: 97.8361% ( 7) 00:07:28.752 14518.745 - 14619.569: 97.8857% ( 8) 00:07:28.752 14619.569 - 14720.394: 97.9415% ( 9) 00:07:28.752 14720.394 - 14821.218: 98.0221% ( 13) 00:07:28.752 14821.218 - 14922.043: 98.1833% ( 26) 00:07:28.752 14922.043 - 15022.868: 98.2453% ( 10) 00:07:28.752 15022.868 - 15123.692: 98.2887% ( 7) 00:07:28.752 15123.692 - 15224.517: 98.3259% ( 6) 00:07:28.752 15224.517 - 15325.342: 98.3693% ( 7) 00:07:28.752 15325.342 - 15426.166: 98.4251% ( 9) 00:07:28.752 15426.166 - 15526.991: 98.5243% ( 16) 00:07:28.752 15526.991 - 15627.815: 98.6235% ( 16) 00:07:28.752 15627.815 - 15728.640: 98.7475% ( 20) 00:07:28.752 15728.640 - 15829.465: 98.8777% ( 21) 00:07:28.752 15829.465 - 15930.289: 98.9459% ( 11) 00:07:28.752 15930.289 - 16031.114: 99.0203% ( 12) 00:07:28.752 16031.114 - 16131.938: 99.0947% ( 12) 00:07:28.752 16131.938 - 16232.763: 99.1567% ( 10) 00:07:28.752 16232.763 - 16333.588: 99.2001% ( 7) 00:07:28.752 16333.588 - 16434.412: 99.2063% ( 1) 00:07:28.752 23492.135 - 23592.960: 99.2125% ( 1) 00:07:28.752 23592.960 - 23693.785: 99.2250% ( 2) 00:07:28.752 23693.785 - 23794.609: 99.2374% ( 2) 00:07:28.752 23794.609 - 23895.434: 99.2560% ( 3) 00:07:28.752 23895.434 - 23996.258: 99.2746% ( 3) 00:07:28.752 23996.258 - 24097.083: 99.2994% ( 4) 00:07:28.752 24097.083 - 24197.908: 99.3304% ( 5) 00:07:28.752 24197.908 - 24298.732: 99.3552% ( 4) 00:07:28.752 24298.732 - 24399.557: 99.3800% ( 4) 00:07:28.752 24399.557 - 24500.382: 99.4048% ( 4) 00:07:28.752 24500.382 - 24601.206: 99.4296% ( 4) 00:07:28.752 24601.206 - 24702.031: 99.4482% ( 3) 00:07:28.752 24702.031 - 24802.855: 99.4730% ( 4) 00:07:28.752 24802.855 - 24903.680: 99.4978% ( 4) 00:07:28.752 24903.680 - 25004.505: 99.5226% ( 4) 00:07:28.752 25004.505 - 25105.329: 99.5474% ( 4) 00:07:28.752 25105.329 - 25206.154: 99.5784% ( 5) 00:07:28.752 25206.154 - 25306.978: 99.6032% ( 4) 00:07:28.752 30852.332 - 31053.982: 99.6466% ( 7) 00:07:28.752 31053.982 - 31255.631: 99.7024% ( 9) 00:07:28.752 31255.631 - 31457.280: 99.7458% ( 7) 00:07:28.752 31457.280 - 
31658.929: 99.7954% ( 8) 00:07:28.752 31658.929 - 31860.578: 99.8450% ( 8) 00:07:28.752 31860.578 - 32062.228: 99.9008% ( 9) 00:07:28.752 32062.228 - 32263.877: 99.9504% ( 8) 00:07:28.752 32263.877 - 32465.526: 100.0000% ( 8) 00:07:28.752 00:07:28.752 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:28.752 ============================================================================== 00:07:28.752 Range in us Cumulative IO count 00:07:28.752 6125.095 - 6150.302: 0.0062% ( 1) 00:07:28.752 6276.332 - 6301.538: 0.0185% ( 2) 00:07:28.752 6301.538 - 6326.745: 0.0432% ( 4) 00:07:28.752 6326.745 - 6351.951: 0.0803% ( 6) 00:07:28.752 6351.951 - 6377.157: 0.1482% ( 11) 00:07:28.752 6377.157 - 6402.363: 0.2532% ( 17) 00:07:28.752 6402.363 - 6427.569: 0.3953% ( 23) 00:07:28.752 6427.569 - 6452.775: 0.5311% ( 22) 00:07:28.752 6452.775 - 6503.188: 0.8770% ( 56) 00:07:28.752 6503.188 - 6553.600: 1.8466% ( 157) 00:07:28.752 6553.600 - 6604.012: 2.9088% ( 172) 00:07:28.752 6604.012 - 6654.425: 4.2614% ( 219) 00:07:28.752 6654.425 - 6704.837: 6.9911% ( 442) 00:07:28.752 6704.837 - 6755.249: 9.6900% ( 437) 00:07:28.752 6755.249 - 6805.662: 13.1299% ( 557) 00:07:28.752 6805.662 - 6856.074: 17.2184% ( 662) 00:07:28.752 6856.074 - 6906.486: 21.0968% ( 628) 00:07:28.752 6906.486 - 6956.898: 24.6294% ( 572) 00:07:28.752 6956.898 - 7007.311: 28.4214% ( 614) 00:07:28.752 7007.311 - 7057.723: 32.2196% ( 615) 00:07:28.752 7057.723 - 7108.135: 35.9560% ( 605) 00:07:28.752 7108.135 - 7158.548: 39.9518% ( 647) 00:07:28.752 7158.548 - 7208.960: 44.0958% ( 671) 00:07:28.752 7208.960 - 7259.372: 47.4617% ( 545) 00:07:28.752 7259.372 - 7309.785: 51.4081% ( 639) 00:07:28.752 7309.785 - 7360.197: 55.9659% ( 738) 00:07:28.752 7360.197 - 7410.609: 59.1835% ( 521) 00:07:28.752 7410.609 - 7461.022: 62.2097% ( 490) 00:07:28.752 7461.022 - 7511.434: 65.0259% ( 456) 00:07:28.752 7511.434 - 7561.846: 67.1751% ( 348) 00:07:28.752 7561.846 - 7612.258: 69.0650% ( 306) 00:07:28.752 7612.258 - 7662.671: 70.6583% ( 258) 00:07:28.752 7662.671 - 7713.083: 72.3258% ( 270) 00:07:28.752 7713.083 - 7763.495: 73.7216% ( 226) 00:07:28.752 7763.495 - 7813.908: 74.9629% ( 201) 00:07:28.752 7813.908 - 7864.320: 75.7843% ( 133) 00:07:28.752 7864.320 - 7914.732: 76.5440% ( 123) 00:07:28.752 7914.732 - 7965.145: 77.2604% ( 116) 00:07:28.753 7965.145 - 8015.557: 77.8656% ( 98) 00:07:28.753 8015.557 - 8065.969: 78.7611% ( 145) 00:07:28.753 8065.969 - 8116.382: 79.5640% ( 130) 00:07:28.753 8116.382 - 8166.794: 80.4471% ( 143) 00:07:28.753 8166.794 - 8217.206: 81.3303% ( 143) 00:07:28.753 8217.206 - 8267.618: 82.2443% ( 148) 00:07:28.753 8267.618 - 8318.031: 82.9422% ( 113) 00:07:28.753 8318.031 - 8368.443: 83.5907% ( 105) 00:07:28.753 8368.443 - 8418.855: 84.2762% ( 111) 00:07:28.753 8418.855 - 8469.268: 84.8938% ( 100) 00:07:28.753 8469.268 - 8519.680: 85.2829% ( 63) 00:07:28.753 8519.680 - 8570.092: 85.5978% ( 51) 00:07:28.753 8570.092 - 8620.505: 85.8881% ( 47) 00:07:28.753 8620.505 - 8670.917: 86.1907% ( 49) 00:07:28.753 8670.917 - 8721.329: 86.4995% ( 50) 00:07:28.753 8721.329 - 8771.742: 86.7465% ( 40) 00:07:28.753 8771.742 - 8822.154: 86.9874% ( 39) 00:07:28.753 8822.154 - 8872.566: 87.4753% ( 79) 00:07:28.753 8872.566 - 8922.978: 87.7594% ( 46) 00:07:28.753 8922.978 - 8973.391: 87.9879% ( 37) 00:07:28.753 8973.391 - 9023.803: 88.2535% ( 43) 00:07:28.753 9023.803 - 9074.215: 88.6549% ( 65) 00:07:28.753 9074.215 - 9124.628: 89.1490% ( 80) 00:07:28.753 9124.628 - 9175.040: 89.6430% ( 80) 00:07:28.753 9175.040 - 9225.452: 89.8715% ( 
37) 00:07:28.753 9225.452 - 9275.865: 90.0753% ( 33) 00:07:28.753 9275.865 - 9326.277: 90.2236% ( 24) 00:07:28.753 9326.277 - 9376.689: 90.3903% ( 27) 00:07:28.753 9376.689 - 9427.102: 90.5694% ( 29) 00:07:28.753 9427.102 - 9477.514: 90.7176% ( 24) 00:07:28.753 9477.514 - 9527.926: 90.8844% ( 27) 00:07:28.753 9527.926 - 9578.338: 91.1623% ( 45) 00:07:28.753 9578.338 - 9628.751: 91.4340% ( 44) 00:07:28.753 9628.751 - 9679.163: 91.6378% ( 33) 00:07:28.753 9679.163 - 9729.575: 91.8046% ( 27) 00:07:28.753 9729.575 - 9779.988: 91.9219% ( 19) 00:07:28.753 9779.988 - 9830.400: 92.0208% ( 16) 00:07:28.753 9830.400 - 9880.812: 92.1134% ( 15) 00:07:28.753 9880.812 - 9931.225: 92.2493% ( 22) 00:07:28.753 9931.225 - 9981.637: 92.3604% ( 18) 00:07:28.753 9981.637 - 10032.049: 92.4592% ( 16) 00:07:28.753 10032.049 - 10082.462: 92.5766% ( 19) 00:07:28.753 10082.462 - 10132.874: 92.6507% ( 12) 00:07:28.753 10132.874 - 10183.286: 92.7001% ( 8) 00:07:28.753 10183.286 - 10233.698: 92.7495% ( 8) 00:07:28.753 10233.698 - 10284.111: 92.8360% ( 14) 00:07:28.753 10284.111 - 10334.523: 92.9163% ( 13) 00:07:28.753 10334.523 - 10384.935: 93.0151% ( 16) 00:07:28.753 10384.935 - 10435.348: 93.2065% ( 31) 00:07:28.753 10435.348 - 10485.760: 93.4412% ( 38) 00:07:28.753 10485.760 - 10536.172: 93.4968% ( 9) 00:07:28.753 10536.172 - 10586.585: 93.5462% ( 8) 00:07:28.753 10586.585 - 10636.997: 93.5833% ( 6) 00:07:28.753 10636.997 - 10687.409: 93.6141% ( 5) 00:07:28.753 10687.409 - 10737.822: 93.6450% ( 5) 00:07:28.753 10737.822 - 10788.234: 93.6697% ( 4) 00:07:28.753 10788.234 - 10838.646: 93.6759% ( 1) 00:07:28.753 10889.058 - 10939.471: 93.7006% ( 4) 00:07:28.753 10939.471 - 10989.883: 93.7253% ( 4) 00:07:28.753 10989.883 - 11040.295: 93.7562% ( 5) 00:07:28.753 11040.295 - 11090.708: 93.8241% ( 11) 00:07:28.753 11090.708 - 11141.120: 93.8982% ( 12) 00:07:28.753 11141.120 - 11191.532: 93.9723% ( 12) 00:07:28.753 11191.532 - 11241.945: 94.0403% ( 11) 00:07:28.753 11241.945 - 11292.357: 94.0897% ( 8) 00:07:28.753 11292.357 - 11342.769: 94.1514% ( 10) 00:07:28.753 11342.769 - 11393.182: 94.1947% ( 7) 00:07:28.753 11393.182 - 11443.594: 94.2626% ( 11) 00:07:28.753 11443.594 - 11494.006: 94.3367% ( 12) 00:07:28.753 11494.006 - 11544.418: 94.4108% ( 12) 00:07:28.753 11544.418 - 11594.831: 94.5220% ( 18) 00:07:28.753 11594.831 - 11645.243: 94.7443% ( 36) 00:07:28.753 11645.243 - 11695.655: 94.8864% ( 23) 00:07:28.753 11695.655 - 11746.068: 95.0346% ( 24) 00:07:28.753 11746.068 - 11796.480: 95.1025% ( 11) 00:07:28.753 11796.480 - 11846.892: 95.1766% ( 12) 00:07:28.753 11846.892 - 11897.305: 95.2384% ( 10) 00:07:28.753 11897.305 - 11947.717: 95.2507% ( 2) 00:07:28.753 11947.717 - 11998.129: 95.2569% ( 1) 00:07:28.753 12048.542 - 12098.954: 95.2631% ( 1) 00:07:28.753 12098.954 - 12149.366: 95.2878% ( 4) 00:07:28.753 12149.366 - 12199.778: 95.3187% ( 5) 00:07:28.753 12199.778 - 12250.191: 95.3804% ( 10) 00:07:28.753 12250.191 - 12300.603: 95.4792% ( 16) 00:07:28.753 12300.603 - 12351.015: 95.5904% ( 18) 00:07:28.753 12351.015 - 12401.428: 95.6954% ( 17) 00:07:28.753 12401.428 - 12451.840: 95.8066% ( 18) 00:07:28.753 12451.840 - 12502.252: 95.9486% ( 23) 00:07:28.753 12502.252 - 12552.665: 96.0907% ( 23) 00:07:28.753 12552.665 - 12603.077: 96.1833% ( 15) 00:07:28.753 12603.077 - 12653.489: 96.2945% ( 18) 00:07:28.753 12653.489 - 12703.902: 96.4736% ( 29) 00:07:28.753 12703.902 - 12754.314: 96.5415% ( 11) 00:07:28.753 12754.314 - 12804.726: 96.5971% ( 9) 00:07:28.753 12804.726 - 12855.138: 96.6465% ( 8) 00:07:28.753 12855.138 - 
12905.551: 96.6774% ( 5) 00:07:28.753 12905.551 - 13006.375: 96.7577% ( 13) 00:07:28.753 13006.375 - 13107.200: 96.8071% ( 8) 00:07:28.753 13107.200 - 13208.025: 96.8379% ( 5) 00:07:28.753 13712.148 - 13812.972: 96.8565% ( 3) 00:07:28.753 13812.972 - 13913.797: 96.9059% ( 8) 00:07:28.753 13913.797 - 14014.622: 96.9985% ( 15) 00:07:28.753 14014.622 - 14115.446: 97.0912% ( 15) 00:07:28.753 14115.446 - 14216.271: 97.2208% ( 21) 00:07:28.753 14216.271 - 14317.095: 97.3938% ( 28) 00:07:28.753 14317.095 - 14417.920: 97.5667% ( 28) 00:07:28.753 14417.920 - 14518.745: 97.7458% ( 29) 00:07:28.753 14518.745 - 14619.569: 97.9187% ( 28) 00:07:28.753 14619.569 - 14720.394: 98.0546% ( 22) 00:07:28.753 14720.394 - 14821.218: 98.2090% ( 25) 00:07:28.753 14821.218 - 14922.043: 98.4313% ( 36) 00:07:28.753 14922.043 - 15022.868: 98.8328% ( 65) 00:07:28.753 15022.868 - 15123.692: 99.0180% ( 30) 00:07:28.753 15123.692 - 15224.517: 99.1292% ( 18) 00:07:28.753 15224.517 - 15325.342: 99.1786% ( 8) 00:07:28.753 15325.342 - 15426.166: 99.2280% ( 8) 00:07:28.753 15426.166 - 15526.991: 99.2651% ( 6) 00:07:28.753 15526.991 - 15627.815: 99.2898% ( 4) 00:07:28.753 15627.815 - 15728.640: 99.3207% ( 5) 00:07:28.753 15728.640 - 15829.465: 99.3454% ( 4) 00:07:28.753 15829.465 - 15930.289: 99.3701% ( 4) 00:07:28.753 15930.289 - 16031.114: 99.3948% ( 4) 00:07:28.753 16031.114 - 16131.938: 99.4195% ( 4) 00:07:28.753 16131.938 - 16232.763: 99.4503% ( 5) 00:07:28.753 16232.763 - 16333.588: 99.4750% ( 4) 00:07:28.753 16333.588 - 16434.412: 99.4998% ( 4) 00:07:28.753 16434.412 - 16535.237: 99.5245% ( 4) 00:07:28.753 16535.237 - 16636.062: 99.5492% ( 4) 00:07:28.753 16636.062 - 16736.886: 99.5739% ( 4) 00:07:28.753 16736.886 - 16837.711: 99.5986% ( 4) 00:07:28.753 16837.711 - 16938.535: 99.6047% ( 1) 00:07:28.753 23088.837 - 23189.662: 99.6233% ( 3) 00:07:28.753 23189.662 - 23290.486: 99.6480% ( 4) 00:07:28.753 23290.486 - 23391.311: 99.6727% ( 4) 00:07:28.753 23391.311 - 23492.135: 99.6974% ( 4) 00:07:28.753 23492.135 - 23592.960: 99.7221% ( 4) 00:07:28.753 23592.960 - 23693.785: 99.7468% ( 4) 00:07:28.753 23693.785 - 23794.609: 99.7715% ( 4) 00:07:28.753 23794.609 - 23895.434: 99.7962% ( 4) 00:07:28.753 23895.434 - 23996.258: 99.8209% ( 4) 00:07:28.753 23996.258 - 24097.083: 99.8456% ( 4) 00:07:28.753 24097.083 - 24197.908: 99.8703% ( 4) 00:07:28.753 24197.908 - 24298.732: 99.9012% ( 5) 00:07:28.753 24298.732 - 24399.557: 99.9259% ( 4) 00:07:28.753 24399.557 - 24500.382: 99.9506% ( 4) 00:07:28.753 24500.382 - 24601.206: 99.9753% ( 4) 00:07:28.753 24601.206 - 24702.031: 100.0000% ( 4) 00:07:28.753 00:07:28.753 ************************************ 00:07:28.753 END TEST nvme_perf 00:07:28.753 ************************************ 00:07:28.753 07:39:18 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:07:28.753 00:07:28.753 real 0m2.498s 00:07:28.753 user 0m2.195s 00:07:28.753 sys 0m0.200s 00:07:28.753 07:39:18 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.753 07:39:18 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:07:29.011 07:39:18 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:29.011 07:39:18 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:29.011 07:39:18 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.011 07:39:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:29.011 ************************************ 00:07:29.011 START TEST nvme_hello_world 
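The nvme_perf histograms above are cumulative, so tail latencies can be read straight off them: the first bucket whose cumulative percentage reaches 99% bounds the p99. A minimal POSIX-awk sketch of that lookup; the log file name and the sed prefix-strip are illustrative assumptions, not part of the test suite, and the script stops at the first histogram in the input:

```bash
# Estimate p99 from the first nvme_perf cumulative histogram on stdin.
# Expects bucket lines shaped like "6099.889 - 6125.095: 0.0124% ( 2)"
# once the "00:07:28.748 " timestamp prefix has been stripped.
sed 's/^[0-9:.]* //' build.log | awk '
  $2 == "-" {
    upper = $3; sub(/:$/, "", upper)   # upper edge of the bucket, in us
    pct = $4;   sub(/%$/, "", pct)     # cumulative percentage of I/Os
    if (pct + 0 >= 99.0) { printf "approx p99 <= %s us\n", upper; exit }
  }'
```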
00:07:29.011 07:39:18 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:29.011 07:39:18 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:07:29.011 07:39:18 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:29.011 07:39:18 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:29.011 ************************************
00:07:29.011 START TEST nvme_hello_world
00:07:29.011 ************************************
00:07:29.011 07:39:18 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:29.011 Initializing NVMe Controllers
00:07:29.011 Attached to 0000:00:11.0
00:07:29.011 Namespace ID: 1 size: 5GB
00:07:29.011 Attached to 0000:00:13.0
00:07:29.011 Namespace ID: 1 size: 1GB
00:07:29.011 Attached to 0000:00:10.0
00:07:29.011 Namespace ID: 1 size: 6GB
00:07:29.011 Attached to 0000:00:12.0
00:07:29.011 Namespace ID: 1 size: 4GB
00:07:29.011 Namespace ID: 2 size: 4GB
00:07:29.011 Namespace ID: 3 size: 4GB
00:07:29.011 Initialization complete.
00:07:29.011 INFO: using host memory buffer for IO
00:07:29.011 Hello world!
00:07:29.011 INFO: using host memory buffer for IO
00:07:29.011 Hello world!
00:07:29.011 INFO: using host memory buffer for IO
00:07:29.011 Hello world!
00:07:29.011 INFO: using host memory buffer for IO
00:07:29.011 Hello world!
00:07:29.011 INFO: using host memory buffer for IO
00:07:29.011 Hello world!
00:07:29.011 INFO: using host memory buffer for IO
00:07:29.011 Hello world!
00:07:29.011 ************************************
00:07:29.011 END TEST nvme_hello_world
00:07:29.011 ************************************
00:07:29.011 
00:07:29.011 real    0m0.226s
00:07:29.011 user    0m0.083s
00:07:29.011 sys     0m0.097s
00:07:29.012 07:39:18 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:29.012 07:39:18 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
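Every test in this log goes through the same run_test wrapper from autotest_common.sh: print a START banner, run the command under bash's time builtin, then print the END banner. A stripped-down sketch of that visible pattern; the real wrapper also manages xtrace state and failure bookkeeping, and its timing output interleaves slightly differently:

```bash
# Minimal sketch of the run_test pattern seen throughout this log.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"        # e.g. /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}
```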
00:07:29.270 07:39:18 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:07:29.270 07:39:18 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:29.270 07:39:18 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:29.270 07:39:18 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:29.270 ************************************
00:07:29.270 START TEST nvme_sgl
00:07:29.270 ************************************
00:07:29.270 07:39:18 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:07:29.270 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:07:29.270 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:07:29.270 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:07:29.270 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:07:29.270 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:07:29.270 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:07:29.270 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:07:29.270 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:07:29.270 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:07:29.270 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:07:29.270 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:07:29.270 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:07:29.270 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:07:29.270 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:07:29.270 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:07:29.270 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:07:29.270 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:07:29.270 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:07:29.270 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:07:29.270 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:07:29.270 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:07:29.529 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:07:29.529 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:07:29.529 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:07:29.529 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:07:29.529 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:07:29.529 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:07:29.529 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:07:29.529 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:07:29.529 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:07:29.529 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:07:29.529 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:07:29.529 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:07:29.529 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:07:29.529 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:07:29.529 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:07:29.529 NVMe Readv/Writev Request test
00:07:29.529 Attached to 0000:00:11.0
00:07:29.529 Attached to 0000:00:13.0
00:07:29.529 Attached to 0000:00:10.0
00:07:29.529 Attached to 0000:00:12.0
00:07:29.529 0000:00:11.0: build_io_request_2 test passed
00:07:29.529 0000:00:11.0: build_io_request_4 test passed
00:07:29.529 0000:00:11.0: build_io_request_5 test passed
00:07:29.529 0000:00:11.0: build_io_request_6 test passed
00:07:29.529 0000:00:11.0: build_io_request_7 test passed
00:07:29.529 0000:00:11.0: build_io_request_10 test passed
00:07:29.529 0000:00:10.0: build_io_request_2 test passed
00:07:29.529 0000:00:10.0: build_io_request_4 test passed
00:07:29.529 0000:00:10.0: build_io_request_5 test passed
00:07:29.529 0000:00:10.0: build_io_request_6 test passed
00:07:29.529 0000:00:10.0: build_io_request_7 test passed
00:07:29.529 0000:00:10.0: build_io_request_10 test passed
00:07:29.529 Cleaning up...
00:07:29.529 ************************************
00:07:29.529 END TEST nvme_sgl
00:07:29.529 ************************************
00:07:29.529 
00:07:29.529 real    0m0.296s
00:07:29.529 user    0m0.144s
00:07:29.529 sys     0m0.098s
00:07:29.529 07:39:19 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:29.529 07:39:19 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
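The "Invalid IO length parameter" lines above are the point of the SGL test: it deliberately builds malformed requests, so a build_io_request line may legitimately end only in the expected rejection or an explicit pass. A quick check over a saved log (the file name is illustrative):

```bash
# Fail if any build_io_request line reports something other than the
# expected rejection or an explicit pass.
if grep 'build_io_request' build.log |
     grep -vE '(Invalid IO length parameter|test passed)$'; then
    echo 'unexpected nvme_sgl outcome' >&2
    exit 1
fi
```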
00:07:29.529 07:39:19 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:07:29.529 07:39:19 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:29.529 07:39:19 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:29.529 07:39:19 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:29.529 ************************************
00:07:29.529 START TEST nvme_e2edp
00:07:29.529 ************************************
00:07:29.529 07:39:19 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:07:29.787 NVMe Write/Read with End-to-End data protection test
00:07:29.787 Attached to 0000:00:11.0
00:07:29.787 Attached to 0000:00:13.0
00:07:29.787 Attached to 0000:00:10.0
00:07:29.787 Attached to 0000:00:12.0
00:07:29.787 Cleaning up...
00:07:29.787 ************************************
00:07:29.787 END TEST nvme_e2edp
00:07:29.787 ************************************
00:07:29.787 
00:07:29.787 real    0m0.207s
00:07:29.787 user    0m0.068s
00:07:29.787 sys     0m0.095s
00:07:29.787 07:39:19 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:29.787 07:39:19 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:07:29.787 07:39:19 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:07:29.787 07:39:19 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:29.787 07:39:19 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:29.787 07:39:19 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:29.787 ************************************
00:07:29.787 START TEST nvme_reserve
00:07:29.787 ************************************
00:07:29.787 07:39:19 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:07:30.045 =====================================================
00:07:30.045 NVMe Controller at PCI bus 0, device 17, function 0
00:07:30.045 =====================================================
00:07:30.045 Reservations: Not Supported
00:07:30.045 =====================================================
00:07:30.045 NVMe Controller at PCI bus 0, device 19, function 0
00:07:30.045 =====================================================
00:07:30.045 Reservations: Not Supported
00:07:30.045 =====================================================
00:07:30.045 NVMe Controller at PCI bus 0, device 16, function 0
00:07:30.045 =====================================================
00:07:30.045 Reservations: Not Supported
00:07:30.045 =====================================================
00:07:30.045 NVMe Controller at PCI bus 0, device 18, function 0
00:07:30.045 =====================================================
00:07:30.045 Reservations: Not Supported
00:07:30.045 Reservation test passed
00:07:30.045 ************************************
00:07:30.045 END TEST nvme_reserve
00:07:30.045 ************************************
00:07:30.045 
00:07:30.045 real    0m0.207s
00:07:30.045 user    0m0.084s
00:07:30.045 sys     0m0.083s
00:07:30.045 07:39:19 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:30.045 07:39:19 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
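"Reservations: Not Supported" is still a pass here: reservations are an optional NVMe feature, advertised through the ONCS field of the Identify Controller data (bit 5 in the base specification). For a kernel-attached device the same check can be made with nvme-cli, roughly as below; the device path is illustrative, and devices bound to SPDK's userspace driver will not appear under /dev:

```bash
# Read ONCS from Identify Controller and test the reservations bit (bit 5).
oncs=$(nvme id-ctrl /dev/nvme0 | awk -F': *' '/^oncs/ {print $2; exit}')
if (( oncs & (1 << 5) )); then
    echo "reservations supported"
else
    echo "reservations not supported"
fi
```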
00:07:30.045 07:39:19 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:07:30.045 07:39:19 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:30.045 07:39:19 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:30.045 07:39:19 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:30.045 ************************************
00:07:30.045 START TEST nvme_err_injection
00:07:30.045 ************************************
00:07:30.045 07:39:19 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:07:30.305 NVMe Error Injection test
00:07:30.305 Attached to 0000:00:11.0
00:07:30.305 Attached to 0000:00:13.0
00:07:30.305 Attached to 0000:00:10.0
00:07:30.305 Attached to 0000:00:12.0
00:07:30.305 0000:00:13.0: get features failed as expected
00:07:30.305 0000:00:10.0: get features failed as expected
00:07:30.305 0000:00:12.0: get features failed as expected
00:07:30.305 0000:00:11.0: get features failed as expected
00:07:30.305 0000:00:11.0: get features successfully as expected
00:07:30.305 0000:00:13.0: get features successfully as expected
00:07:30.305 0000:00:10.0: get features successfully as expected
00:07:30.305 0000:00:12.0: get features successfully as expected
00:07:30.305 0000:00:11.0: read failed as expected
00:07:30.305 0000:00:13.0: read failed as expected
00:07:30.305 0000:00:10.0: read failed as expected
00:07:30.305 0000:00:12.0: read failed as expected
00:07:30.305 0000:00:11.0: read successfully as expected
00:07:30.305 0000:00:13.0: read successfully as expected
00:07:30.305 0000:00:10.0: read successfully as expected
00:07:30.305 0000:00:12.0: read successfully as expected
00:07:30.305 Cleaning up...
00:07:30.305 ************************************
00:07:30.305 END TEST nvme_err_injection
00:07:30.305 ************************************
00:07:30.305 
00:07:30.305 real    0m0.214s
00:07:30.305 user    0m0.076s
00:07:30.305 sys     0m0.097s
00:07:30.305 07:39:19 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:30.305 07:39:19 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
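The error-injection output comes in strict pairs: with an error injected, "get features" and "read" must fail as expected, and with the injection cleared the same commands must succeed as expected. A small consistency check over a saved log (file name illustrative):

```bash
# Every "failed as expected" line should have a matching
# "successfully as expected" line once the injection is cleared.
fails=$(grep -c 'failed as expected' build.log)
passes=$(grep -c 'successfully as expected' build.log)
if [[ "$fails" -ne "$passes" ]]; then
    echo "mismatched error-injection pairs: $fails vs $passes" >&2
    exit 1
fi
```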
00:07:31.681 submit (in ns) avg, min, max = 11787.4, 10212.3, 297080.8 00:07:31.681 complete (in ns) avg, min, max = 7832.2, 7159.2, 262586.2 00:07:31.681 00:07:31.682 Submit histogram 00:07:31.682 ================ 00:07:31.682 Range in us Cumulative Count 00:07:31.682 10.191 - 10.240: 0.0056% ( 1) 00:07:31.682 10.240 - 10.289: 0.0112% ( 1) 00:07:31.682 10.289 - 10.338: 0.0168% ( 1) 00:07:31.682 10.338 - 10.388: 0.0225% ( 1) 00:07:31.682 10.880 - 10.929: 0.0618% ( 7) 00:07:31.682 10.929 - 10.978: 0.3763% ( 56) 00:07:31.682 10.978 - 11.028: 1.3871% ( 180) 00:07:31.682 11.028 - 11.077: 4.3915% ( 535) 00:07:31.682 11.077 - 11.126: 10.6643% ( 1117) 00:07:31.682 11.126 - 11.175: 21.1771% ( 1872) 00:07:31.682 11.175 - 11.225: 35.4130% ( 2535) 00:07:31.682 11.225 - 11.274: 51.0642% ( 2787) 00:07:31.682 11.274 - 11.323: 65.5697% ( 2583) 00:07:31.682 11.323 - 11.372: 76.0937% ( 1874) 00:07:31.682 11.372 - 11.422: 82.5350% ( 1147) 00:07:31.682 11.422 - 11.471: 86.3930% ( 687) 00:07:31.682 11.471 - 11.520: 88.2293% ( 327) 00:07:31.682 11.520 - 11.569: 89.2514% ( 182) 00:07:31.682 11.569 - 11.618: 89.7793% ( 94) 00:07:31.682 11.618 - 11.668: 90.1050% ( 58) 00:07:31.682 11.668 - 11.717: 90.3690% ( 47) 00:07:31.682 11.717 - 11.766: 90.5487% ( 32) 00:07:31.682 11.766 - 11.815: 90.7227% ( 31) 00:07:31.682 11.815 - 11.865: 90.8968% ( 31) 00:07:31.682 11.865 - 11.914: 91.0204% ( 22) 00:07:31.682 11.914 - 11.963: 91.1046% ( 15) 00:07:31.682 11.963 - 12.012: 91.2113% ( 19) 00:07:31.682 12.012 - 12.062: 91.3854% ( 31) 00:07:31.682 12.062 - 12.111: 91.5988% ( 38) 00:07:31.682 12.111 - 12.160: 91.7841% ( 33) 00:07:31.682 12.160 - 12.209: 91.9695% ( 33) 00:07:31.682 12.209 - 12.258: 92.1660% ( 35) 00:07:31.682 12.258 - 12.308: 92.2952% ( 23) 00:07:31.682 12.308 - 12.357: 92.4243% ( 23) 00:07:31.682 12.357 - 12.406: 92.4973% ( 13) 00:07:31.682 12.406 - 12.455: 92.6040% ( 19) 00:07:31.682 12.455 - 12.505: 92.6883% ( 15) 00:07:31.682 12.505 - 12.554: 92.7444% ( 10) 00:07:31.682 12.554 - 12.603: 92.7725% ( 5) 00:07:31.682 12.603 - 12.702: 92.8455% ( 13) 00:07:31.682 12.702 - 12.800: 92.8792% ( 6) 00:07:31.682 12.800 - 12.898: 92.9073% ( 5) 00:07:31.682 12.898 - 12.997: 92.9410% ( 6) 00:07:31.682 12.997 - 13.095: 93.0308% ( 16) 00:07:31.682 13.095 - 13.194: 93.0982% ( 12) 00:07:31.682 13.194 - 13.292: 93.2049% ( 19) 00:07:31.682 13.292 - 13.391: 93.3229% ( 21) 00:07:31.682 13.391 - 13.489: 93.4239% ( 18) 00:07:31.682 13.489 - 13.588: 93.5362% ( 20) 00:07:31.682 13.588 - 13.686: 93.5980% ( 11) 00:07:31.682 13.686 - 13.785: 93.6373% ( 7) 00:07:31.682 13.785 - 13.883: 93.6991% ( 11) 00:07:31.682 13.883 - 13.982: 93.7496% ( 9) 00:07:31.682 13.982 - 14.080: 93.8002% ( 9) 00:07:31.682 14.080 - 14.178: 93.8395% ( 7) 00:07:31.682 14.178 - 14.277: 93.8732% ( 6) 00:07:31.682 14.277 - 14.375: 93.9294% ( 10) 00:07:31.682 14.375 - 14.474: 94.0192% ( 16) 00:07:31.682 14.474 - 14.572: 94.1821% ( 29) 00:07:31.682 14.572 - 14.671: 94.3112% ( 23) 00:07:31.682 14.671 - 14.769: 94.5471% ( 42) 00:07:31.682 14.769 - 14.868: 94.8054% ( 46) 00:07:31.682 14.868 - 14.966: 95.2041% ( 71) 00:07:31.682 14.966 - 15.065: 95.5186% ( 56) 00:07:31.682 15.065 - 15.163: 95.7826% ( 47) 00:07:31.682 15.163 - 15.262: 96.0633% ( 50) 00:07:31.682 15.262 - 15.360: 96.4003% ( 60) 00:07:31.682 15.360 - 15.458: 96.6586% ( 46) 00:07:31.682 15.458 - 15.557: 96.8102% ( 27) 00:07:31.682 15.557 - 15.655: 96.9731% ( 29) 00:07:31.682 15.655 - 15.754: 97.1079% ( 24) 00:07:31.682 15.754 - 15.852: 97.2202% ( 20) 00:07:31.682 15.852 - 15.951: 97.3381% ( 21) 00:07:31.682 
15.951 - 16.049: 97.4448% ( 19) 00:07:31.682 16.049 - 16.148: 97.5347% ( 16) 00:07:31.682 16.148 - 16.246: 97.6133% ( 14) 00:07:31.682 16.246 - 16.345: 97.6358% ( 4) 00:07:31.682 16.345 - 16.443: 97.6638% ( 5) 00:07:31.682 16.443 - 16.542: 97.7088% ( 8) 00:07:31.682 16.542 - 16.640: 97.7368% ( 5) 00:07:31.682 16.640 - 16.738: 97.7649% ( 5) 00:07:31.682 16.738 - 16.837: 97.8267% ( 11) 00:07:31.682 16.837 - 16.935: 97.8660% ( 7) 00:07:31.682 16.935 - 17.034: 97.9446% ( 14) 00:07:31.682 17.034 - 17.132: 98.0457% ( 18) 00:07:31.682 17.132 - 17.231: 98.1356% ( 16) 00:07:31.682 17.231 - 17.329: 98.1693% ( 6) 00:07:31.682 17.329 - 17.428: 98.2423% ( 13) 00:07:31.682 17.428 - 17.526: 98.3040% ( 11) 00:07:31.682 17.526 - 17.625: 98.3433% ( 7) 00:07:31.682 17.625 - 17.723: 98.4388% ( 17) 00:07:31.682 17.723 - 17.822: 98.5174% ( 14) 00:07:31.682 17.822 - 17.920: 98.5792% ( 11) 00:07:31.682 17.920 - 18.018: 98.6298% ( 9) 00:07:31.682 18.018 - 18.117: 98.6971% ( 12) 00:07:31.682 18.117 - 18.215: 98.7814% ( 15) 00:07:31.682 18.215 - 18.314: 98.8319% ( 9) 00:07:31.682 18.314 - 18.412: 98.8544% ( 4) 00:07:31.682 18.412 - 18.511: 98.8712% ( 3) 00:07:31.682 18.511 - 18.609: 98.8881% ( 3) 00:07:31.682 18.708 - 18.806: 98.8937% ( 1) 00:07:31.682 18.806 - 18.905: 98.9049% ( 2) 00:07:31.682 18.905 - 19.003: 98.9105% ( 1) 00:07:31.682 19.003 - 19.102: 98.9274% ( 3) 00:07:31.682 19.102 - 19.200: 98.9330% ( 1) 00:07:31.682 19.200 - 19.298: 98.9386% ( 1) 00:07:31.682 19.495 - 19.594: 98.9442% ( 1) 00:07:31.682 19.594 - 19.692: 98.9499% ( 1) 00:07:31.682 19.692 - 19.791: 98.9555% ( 1) 00:07:31.682 19.889 - 19.988: 98.9611% ( 1) 00:07:31.682 20.185 - 20.283: 98.9667% ( 1) 00:07:31.682 20.283 - 20.382: 98.9723% ( 1) 00:07:31.682 20.382 - 20.480: 98.9892% ( 3) 00:07:31.682 20.578 - 20.677: 99.0004% ( 2) 00:07:31.682 20.677 - 20.775: 99.0116% ( 2) 00:07:31.682 20.874 - 20.972: 99.0229% ( 2) 00:07:31.682 20.972 - 21.071: 99.0285% ( 1) 00:07:31.682 21.366 - 21.465: 99.0397% ( 2) 00:07:31.682 21.760 - 21.858: 99.0509% ( 2) 00:07:31.682 21.957 - 22.055: 99.0622% ( 2) 00:07:31.682 22.154 - 22.252: 99.0678% ( 1) 00:07:31.682 22.351 - 22.449: 99.0734% ( 1) 00:07:31.682 22.449 - 22.548: 99.0790% ( 1) 00:07:31.682 22.745 - 22.843: 99.0846% ( 1) 00:07:31.682 23.138 - 23.237: 99.0902% ( 1) 00:07:31.682 23.237 - 23.335: 99.0959% ( 1) 00:07:31.682 23.335 - 23.434: 99.1015% ( 1) 00:07:31.682 23.434 - 23.532: 99.1071% ( 1) 00:07:31.682 24.911 - 25.009: 99.1127% ( 1) 00:07:31.682 25.108 - 25.206: 99.1239% ( 2) 00:07:31.682 25.403 - 25.600: 99.1296% ( 1) 00:07:31.682 25.797 - 25.994: 99.1352% ( 1) 00:07:31.682 26.388 - 26.585: 99.1408% ( 1) 00:07:31.682 26.978 - 27.175: 99.1520% ( 2) 00:07:31.682 27.372 - 27.569: 99.1576% ( 1) 00:07:31.682 27.766 - 27.963: 99.1745% ( 3) 00:07:31.682 27.963 - 28.160: 99.1913% ( 3) 00:07:31.682 28.160 - 28.357: 99.2194% ( 5) 00:07:31.682 28.357 - 28.554: 99.2419% ( 4) 00:07:31.682 28.948 - 29.145: 99.2475% ( 1) 00:07:31.682 29.145 - 29.342: 99.2531% ( 1) 00:07:31.682 29.342 - 29.538: 99.2587% ( 1) 00:07:31.682 30.326 - 30.523: 99.2643% ( 1) 00:07:31.682 30.917 - 31.114: 99.2700% ( 1) 00:07:31.682 31.114 - 31.311: 99.3093% ( 7) 00:07:31.682 31.311 - 31.508: 99.5002% ( 34) 00:07:31.682 31.508 - 31.705: 99.6406% ( 25) 00:07:31.682 31.705 - 31.902: 99.7248% ( 15) 00:07:31.682 31.902 - 32.098: 99.7810% ( 10) 00:07:31.682 32.492 - 32.689: 99.7866% ( 1) 00:07:31.682 32.689 - 32.886: 99.7922% ( 1) 00:07:31.682 32.886 - 33.083: 99.7978% ( 1) 00:07:31.682 33.083 - 33.280: 99.8034% ( 1) 00:07:31.682 33.674 - 33.871: 
99.8091% ( 1) 00:07:31.682 35.446 - 35.643: 99.8147% ( 1) 00:07:31.682 38.006 - 38.203: 99.8203% ( 1) 00:07:31.682 38.400 - 38.597: 99.8259% ( 1) 00:07:31.682 38.597 - 38.794: 99.8315% ( 1) 00:07:31.682 43.520 - 43.717: 99.8371% ( 1) 00:07:31.682 43.717 - 43.914: 99.8428% ( 1) 00:07:31.682 44.111 - 44.308: 99.8484% ( 1) 00:07:31.682 44.505 - 44.702: 99.8540% ( 1) 00:07:31.682 44.702 - 44.898: 99.8652% ( 2) 00:07:31.682 44.898 - 45.095: 99.8821% ( 3) 00:07:31.682 45.292 - 45.489: 99.8933% ( 2) 00:07:31.682 45.686 - 45.883: 99.8989% ( 1) 00:07:31.682 46.080 - 46.277: 99.9045% ( 1) 00:07:31.682 46.671 - 46.868: 99.9101% ( 1) 00:07:31.682 47.655 - 47.852: 99.9158% ( 1) 00:07:31.682 48.246 - 48.443: 99.9214% ( 1) 00:07:31.682 48.640 - 48.837: 99.9270% ( 1) 00:07:31.682 49.625 - 49.822: 99.9382% ( 2) 00:07:31.682 50.412 - 50.806: 99.9438% ( 1) 00:07:31.682 51.200 - 51.594: 99.9495% ( 1) 00:07:31.682 51.594 - 51.988: 99.9607% ( 2) 00:07:31.682 54.745 - 55.138: 99.9663% ( 1) 00:07:31.682 59.077 - 59.471: 99.9719% ( 1) 00:07:31.682 66.166 - 66.560: 99.9775% ( 1) 00:07:31.682 74.043 - 74.437: 99.9832% ( 1) 00:07:31.682 96.886 - 97.280: 99.9888% ( 1) 00:07:31.682 285.145 - 286.720: 99.9944% ( 1) 00:07:31.682 296.172 - 297.748: 100.0000% ( 1) 00:07:31.682 00:07:31.683 Complete histogram 00:07:31.683 ================== 00:07:31.683 Range in us Cumulative Count 00:07:31.683 7.138 - 7.188: 0.0056% ( 1) 00:07:31.683 7.188 - 7.237: 0.2190% ( 38) 00:07:31.683 7.237 - 7.286: 2.1901% ( 351) 00:07:31.683 7.286 - 7.335: 9.0470% ( 1221) 00:07:31.683 7.335 - 7.385: 20.2448% ( 1994) 00:07:31.683 7.385 - 7.434: 36.2329% ( 2847) 00:07:31.683 7.434 - 7.483: 55.8432% ( 3492) 00:07:31.683 7.483 - 7.532: 72.8702% ( 3032) 00:07:31.683 7.532 - 7.582: 83.6469% ( 1919) 00:07:31.683 7.582 - 7.631: 88.9706% ( 948) 00:07:31.683 7.631 - 7.680: 91.2450% ( 405) 00:07:31.683 7.680 - 7.729: 92.0425% ( 142) 00:07:31.683 7.729 - 7.778: 92.4973% ( 81) 00:07:31.683 7.778 - 7.828: 92.7163% ( 39) 00:07:31.683 7.828 - 7.877: 92.8904% ( 31) 00:07:31.683 7.877 - 7.926: 92.9410% ( 9) 00:07:31.683 7.926 - 7.975: 92.9915% ( 9) 00:07:31.683 7.975 - 8.025: 93.1151% ( 22) 00:07:31.683 8.025 - 8.074: 93.2162% ( 18) 00:07:31.683 8.074 - 8.123: 93.3341% ( 21) 00:07:31.683 8.123 - 8.172: 93.4352% ( 18) 00:07:31.683 8.172 - 8.222: 93.5250% ( 16) 00:07:31.683 8.222 - 8.271: 93.5924% ( 12) 00:07:31.683 8.271 - 8.320: 93.6429% ( 9) 00:07:31.683 8.320 - 8.369: 93.7216% ( 14) 00:07:31.683 8.369 - 8.418: 93.8002% ( 14) 00:07:31.683 8.418 - 8.468: 93.8676% ( 12) 00:07:31.683 8.468 - 8.517: 93.9069% ( 7) 00:07:31.683 8.517 - 8.566: 93.9518% ( 8) 00:07:31.683 8.566 - 8.615: 93.9799% ( 5) 00:07:31.683 8.714 - 8.763: 93.9855% ( 1) 00:07:31.683 9.009 - 9.058: 93.9967% ( 2) 00:07:31.683 9.058 - 9.108: 94.0080% ( 2) 00:07:31.683 9.157 - 9.206: 94.0136% ( 1) 00:07:31.683 9.206 - 9.255: 94.0192% ( 1) 00:07:31.683 9.452 - 9.502: 94.0248% ( 1) 00:07:31.683 9.502 - 9.551: 94.0304% ( 1) 00:07:31.683 9.551 - 9.600: 94.0473% ( 3) 00:07:31.683 9.600 - 9.649: 94.0529% ( 1) 00:07:31.683 9.649 - 9.698: 94.0641% ( 2) 00:07:31.683 9.698 - 9.748: 94.0810% ( 3) 00:07:31.683 9.748 - 9.797: 94.1034% ( 4) 00:07:31.683 9.797 - 9.846: 94.1708% ( 12) 00:07:31.683 9.846 - 9.895: 94.1989% ( 5) 00:07:31.683 9.895 - 9.945: 94.2719% ( 13) 00:07:31.683 9.945 - 9.994: 94.4067% ( 24) 00:07:31.683 9.994 - 10.043: 94.5246% ( 21) 00:07:31.683 10.043 - 10.092: 94.6650% ( 25) 00:07:31.683 10.092 - 10.142: 94.8223% ( 28) 00:07:31.683 10.142 - 10.191: 95.0581% ( 42) 00:07:31.683 10.191 - 10.240: 
95.2210% ( 29) 00:07:31.683 10.240 - 10.289: 95.4344% ( 38) 00:07:31.683 10.289 - 10.338: 95.6365% ( 36) 00:07:31.683 10.338 - 10.388: 95.8612% ( 40) 00:07:31.683 10.388 - 10.437: 96.0914% ( 41) 00:07:31.683 10.437 - 10.486: 96.3161% ( 40) 00:07:31.683 10.486 - 10.535: 96.5070% ( 34) 00:07:31.683 10.535 - 10.585: 96.6642% ( 28) 00:07:31.683 10.585 - 10.634: 96.8159% ( 27) 00:07:31.683 10.634 - 10.683: 96.9956% ( 32) 00:07:31.683 10.683 - 10.732: 97.1079% ( 20) 00:07:31.683 10.732 - 10.782: 97.2314% ( 22) 00:07:31.683 10.782 - 10.831: 97.3269% ( 17) 00:07:31.683 10.831 - 10.880: 97.3999% ( 13) 00:07:31.683 10.880 - 10.929: 97.4841% ( 15) 00:07:31.683 10.929 - 10.978: 97.5234% ( 7) 00:07:31.683 10.978 - 11.028: 97.5796% ( 10) 00:07:31.683 11.028 - 11.077: 97.6189% ( 7) 00:07:31.683 11.077 - 11.126: 97.6470% ( 5) 00:07:31.683 11.126 - 11.175: 97.6863% ( 7) 00:07:31.683 11.175 - 11.225: 97.7088% ( 4) 00:07:31.683 11.225 - 11.274: 97.7256% ( 3) 00:07:31.683 11.274 - 11.323: 97.7537% ( 5) 00:07:31.683 11.323 - 11.372: 97.7593% ( 1) 00:07:31.683 11.372 - 11.422: 97.7705% ( 2) 00:07:31.683 11.422 - 11.471: 97.7762% ( 1) 00:07:31.683 11.471 - 11.520: 97.7818% ( 1) 00:07:31.683 11.520 - 11.569: 97.7874% ( 1) 00:07:31.683 11.618 - 11.668: 97.7986% ( 2) 00:07:31.683 11.668 - 11.717: 97.8155% ( 3) 00:07:31.683 11.865 - 11.914: 97.8211% ( 1) 00:07:31.683 12.012 - 12.062: 97.8267% ( 1) 00:07:31.683 12.258 - 12.308: 97.8323% ( 1) 00:07:31.683 12.702 - 12.800: 97.8492% ( 3) 00:07:31.683 12.800 - 12.898: 97.8660% ( 3) 00:07:31.683 12.898 - 12.997: 97.8772% ( 2) 00:07:31.683 12.997 - 13.095: 97.9053% ( 5) 00:07:31.683 13.095 - 13.194: 97.9671% ( 11) 00:07:31.683 13.194 - 13.292: 98.0232% ( 10) 00:07:31.683 13.292 - 13.391: 98.1075% ( 15) 00:07:31.683 13.391 - 13.489: 98.1861% ( 14) 00:07:31.683 13.489 - 13.588: 98.2423% ( 10) 00:07:31.683 13.588 - 13.686: 98.2928% ( 9) 00:07:31.683 13.686 - 13.785: 98.3602% ( 12) 00:07:31.683 13.785 - 13.883: 98.4444% ( 15) 00:07:31.683 13.883 - 13.982: 98.4950% ( 9) 00:07:31.683 13.982 - 14.080: 98.5904% ( 17) 00:07:31.683 14.080 - 14.178: 98.6185% ( 5) 00:07:31.683 14.178 - 14.277: 98.6691% ( 9) 00:07:31.683 14.277 - 14.375: 98.7140% ( 8) 00:07:31.683 14.375 - 14.474: 98.7645% ( 9) 00:07:31.683 14.474 - 14.572: 98.7814% ( 3) 00:07:31.683 14.572 - 14.671: 98.8095% ( 5) 00:07:31.683 14.671 - 14.769: 98.8656% ( 10) 00:07:31.683 14.769 - 14.868: 98.8937% ( 5) 00:07:31.683 14.868 - 14.966: 98.9049% ( 2) 00:07:31.683 14.966 - 15.065: 98.9386% ( 6) 00:07:31.683 15.065 - 15.163: 98.9442% ( 1) 00:07:31.683 15.163 - 15.262: 98.9499% ( 1) 00:07:31.683 15.262 - 15.360: 98.9611% ( 2) 00:07:31.683 15.360 - 15.458: 98.9723% ( 2) 00:07:31.683 15.458 - 15.557: 98.9779% ( 1) 00:07:31.683 15.557 - 15.655: 98.9835% ( 1) 00:07:31.683 15.655 - 15.754: 98.9892% ( 1) 00:07:31.683 15.754 - 15.852: 99.0004% ( 2) 00:07:31.683 15.852 - 15.951: 99.0060% ( 1) 00:07:31.683 15.951 - 16.049: 99.0172% ( 2) 00:07:31.683 16.246 - 16.345: 99.0229% ( 1) 00:07:31.683 16.443 - 16.542: 99.0285% ( 1) 00:07:31.683 16.837 - 16.935: 99.0397% ( 2) 00:07:31.683 17.329 - 17.428: 99.0453% ( 1) 00:07:31.683 17.526 - 17.625: 99.0509% ( 1) 00:07:31.683 17.822 - 17.920: 99.0566% ( 1) 00:07:31.683 18.314 - 18.412: 99.0622% ( 1) 00:07:31.683 18.609 - 18.708: 99.0678% ( 1) 00:07:31.683 18.708 - 18.806: 99.0734% ( 1) 00:07:31.683 18.806 - 18.905: 99.0846% ( 2) 00:07:31.683 19.200 - 19.298: 99.0902% ( 1) 00:07:31.683 19.397 - 19.495: 99.0959% ( 1) 00:07:31.683 19.791 - 19.889: 99.1015% ( 1) 00:07:31.683 19.889 - 19.988: 99.1408% 
( 7) 00:07:31.683 19.988 - 20.086: 99.1520% ( 2) 00:07:31.683 20.086 - 20.185: 99.1745% ( 4) 00:07:31.683 20.185 - 20.283: 99.1913% ( 3) 00:07:31.683 20.382 - 20.480: 99.2082% ( 3) 00:07:31.683 20.578 - 20.677: 99.2138% ( 1) 00:07:31.683 20.677 - 20.775: 99.2194% ( 1) 00:07:31.683 20.874 - 20.972: 99.2306% ( 2) 00:07:31.683 20.972 - 21.071: 99.2363% ( 1) 00:07:31.683 21.071 - 21.169: 99.2419% ( 1) 00:07:31.683 21.268 - 21.366: 99.2475% ( 1) 00:07:31.683 21.760 - 21.858: 99.2531% ( 1) 00:07:31.683 22.055 - 22.154: 99.2812% ( 5) 00:07:31.683 22.154 - 22.252: 99.3542% ( 13) 00:07:31.683 22.252 - 22.351: 99.5114% ( 28) 00:07:31.683 22.351 - 22.449: 99.6631% ( 27) 00:07:31.683 22.449 - 22.548: 99.7417% ( 14) 00:07:31.683 22.548 - 22.646: 99.7810% ( 7) 00:07:31.683 22.646 - 22.745: 99.7978% ( 3) 00:07:31.683 22.745 - 22.843: 99.8147% ( 3) 00:07:31.683 22.843 - 22.942: 99.8203% ( 1) 00:07:31.683 22.942 - 23.040: 99.8371% ( 3) 00:07:31.683 23.138 - 23.237: 99.8428% ( 1) 00:07:31.683 23.729 - 23.828: 99.8484% ( 1) 00:07:31.683 23.926 - 24.025: 99.8540% ( 1) 00:07:31.683 25.108 - 25.206: 99.8596% ( 1) 00:07:31.683 26.191 - 26.388: 99.8652% ( 1) 00:07:31.683 26.978 - 27.175: 99.8708% ( 1) 00:07:31.683 27.175 - 27.372: 99.8765% ( 1) 00:07:31.683 27.963 - 28.160: 99.8821% ( 1) 00:07:31.683 29.735 - 29.932: 99.8877% ( 1) 00:07:31.683 32.098 - 32.295: 99.8989% ( 2) 00:07:31.683 32.295 - 32.492: 99.9045% ( 1) 00:07:31.683 32.492 - 32.689: 99.9214% ( 3) 00:07:31.683 32.689 - 32.886: 99.9270% ( 1) 00:07:31.683 32.886 - 33.083: 99.9326% ( 1) 00:07:31.683 33.083 - 33.280: 99.9438% ( 2) 00:07:31.683 33.280 - 33.477: 99.9495% ( 1) 00:07:31.683 35.446 - 35.643: 99.9551% ( 1) 00:07:31.683 38.400 - 38.597: 99.9607% ( 1) 00:07:31.683 38.794 - 38.991: 99.9663% ( 1) 00:07:31.683 39.582 - 39.778: 99.9719% ( 1) 00:07:31.683 41.945 - 42.142: 99.9775% ( 1) 00:07:31.683 51.988 - 52.382: 99.9832% ( 1) 00:07:31.683 53.563 - 53.957: 99.9888% ( 1) 00:07:31.683 63.409 - 63.803: 99.9944% ( 1) 00:07:31.683 261.514 - 263.089: 100.0000% ( 1) 00:07:31.683 00:07:31.683 ************************************ 00:07:31.683 END TEST nvme_overhead 00:07:31.683 ************************************ 00:07:31.683 00:07:31.683 real 0m1.197s 00:07:31.683 user 0m1.063s 00:07:31.683 sys 0m0.101s 00:07:31.683 07:39:21 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:31.683 07:39:21 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:07:31.684 07:39:21 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:31.684 07:39:21 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:31.684 07:39:21 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:31.684 07:39:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:31.684 ************************************ 00:07:31.684 START TEST nvme_arbitration 00:07:31.684 ************************************ 00:07:31.684 07:39:21 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:34.986 Initializing NVMe Controllers 00:07:34.986 Attached to 0000:00:11.0 00:07:34.986 Attached to 0000:00:13.0 00:07:34.986 Attached to 0000:00:10.0 00:07:34.986 Attached to 0000:00:12.0 00:07:34.986 Associating QEMU NVMe Ctrl (12341 ) with lcore 0 00:07:34.986 Associating QEMU NVMe Ctrl (12343 ) with lcore 1 00:07:34.986 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:07:34.986 Associating QEMU NVMe Ctrl (12342 ) 
with lcore 3 00:07:34.986 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:07:34.986 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:07:34.986 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:07:34.986 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:07:34.986 Initialization complete. Launching workers. 00:07:34.986 Starting thread on core 1 with urgent priority queue 00:07:34.986 Starting thread on core 2 with urgent priority queue 00:07:34.986 Starting thread on core 3 with urgent priority queue 00:07:34.986 Starting thread on core 0 with urgent priority queue 00:07:34.986 QEMU NVMe Ctrl (12341 ) core 0: 917.33 IO/s 109.01 secs/100000 ios 00:07:34.986 QEMU NVMe Ctrl (12342 ) core 0: 917.33 IO/s 109.01 secs/100000 ios 00:07:34.986 QEMU NVMe Ctrl (12343 ) core 1: 874.67 IO/s 114.33 secs/100000 ios 00:07:34.986 QEMU NVMe Ctrl (12342 ) core 1: 874.67 IO/s 114.33 secs/100000 ios 00:07:34.986 QEMU NVMe Ctrl (12340 ) core 2: 896.00 IO/s 111.61 secs/100000 ios 00:07:34.986 QEMU NVMe Ctrl (12342 ) core 3: 917.33 IO/s 109.01 secs/100000 ios 00:07:34.986 ======================================================== 00:07:34.986 00:07:34.986 ************************************ 00:07:34.986 END TEST nvme_arbitration 00:07:34.986 ************************************ 00:07:34.986 00:07:34.986 real 0m3.310s 00:07:34.986 user 0m9.327s 00:07:34.986 sys 0m0.097s 00:07:34.986 07:39:24 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:34.986 07:39:24 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:07:34.986 07:39:24 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:34.986 07:39:24 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:34.986 07:39:24 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:34.986 07:39:24 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:34.986 ************************************ 00:07:34.986 START TEST nvme_single_aen 00:07:34.986 ************************************ 00:07:34.986 07:39:24 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:34.986 Asynchronous Event Request test 00:07:34.986 Attached to 0000:00:11.0 00:07:34.986 Attached to 0000:00:13.0 00:07:34.986 Attached to 0000:00:10.0 00:07:34.986 Attached to 0000:00:12.0 00:07:34.986 Reset controller to setup AER completions for this process 00:07:34.986 Registering asynchronous event callbacks... 
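The AER exercise that follows is a Get/Set Features round-trip on the Temperature Threshold feature (FID 0x04): read the original threshold (343 Kelvin), set it below the drives' current composite temperature (323 Kelvin here) so each controller raises an asynchronous event, then restore it once the events arrive. For illustration only, the same two admin commands issued with nvme-cli, which is not part of this run; the device path and root privileges are assumptions:

  nvme get-feature /dev/nvme0 -f 0x04            # read threshold; 343 K = 0x157
  nvme set-feature /dev/nvme0 -f 0x04 -v 0x142   # 322 K, below the 323 K reading, so an AER fires
  nvme set-feature /dev/nvme0 -f 0x04 -v 0x157   # restore the original 343 K threshold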
00:07:34.986 Getting orig temperature thresholds of all controllers 00:07:34.986 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:34.986 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:34.986 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:34.986 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:34.986 Setting all controllers temperature threshold low to trigger AER 00:07:34.986 Waiting for all controllers temperature threshold to be set lower 00:07:34.986 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:34.986 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:07:34.986 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:34.986 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:07:34.986 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:34.986 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:07:34.986 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:34.986 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:07:34.986 Waiting for all controllers to trigger AER and reset threshold 00:07:34.986 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:34.986 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:34.986 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:34.986 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:34.986 Cleaning up... 00:07:34.986 ************************************ 00:07:34.986 END TEST nvme_single_aen 00:07:34.986 ************************************ 00:07:34.986 00:07:34.986 real 0m0.230s 00:07:34.986 user 0m0.081s 00:07:34.986 sys 0m0.102s 00:07:34.986 07:39:24 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:34.986 07:39:24 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:07:34.986 07:39:24 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:07:34.986 07:39:24 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:34.986 07:39:24 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:34.986 07:39:24 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:34.986 ************************************ 00:07:34.986 START TEST nvme_doorbell_aers 00:07:34.986 ************************************ 00:07:34.986 07:39:24 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:07:34.986 07:39:24 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:07:34.986 07:39:24 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:07:34.986 07:39:24 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:07:34.986 07:39:24 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:07:34.986 07:39:24 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:34.986 07:39:24 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:07:34.986 07:39:24 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:34.986 07:39:24 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:34.986 07:39:24 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 
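nvme.sh builds the device list by piping gen_nvme.sh through jq, exactly as traced above, and then runs doorbell_aers once per PCIe address under a 10 second timeout. The "Failure: test_write_invalid_db" style lines that follow are the expected outcome: the tool deliberately writes invalid doorbell values and waits for the controller to flag them asynchronously. A condensed sketch of that loop, with paths as they appear in the trace:

  bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
  for bdf in "${bdfs[@]}"; do
      timeout --preserve-status 10 \
          /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers \
          -r "trtype:PCIe traddr:$bdf"
  done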
00:07:35.247 07:39:24 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:35.247 07:39:24 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:35.247 07:39:24 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:35.247 07:39:24 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:07:35.247 [2024-11-29 07:39:25.168594] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63051) is not found. Dropping the request. 00:07:45.222 Executing: test_write_invalid_db 00:07:45.222 Waiting for AER completion... 00:07:45.222 Failure: test_write_invalid_db 00:07:45.222 00:07:45.222 Executing: test_invalid_db_write_overflow_sq 00:07:45.222 Waiting for AER completion... 00:07:45.222 Failure: test_invalid_db_write_overflow_sq 00:07:45.222 00:07:45.222 Executing: test_invalid_db_write_overflow_cq 00:07:45.222 Waiting for AER completion... 00:07:45.222 Failure: test_invalid_db_write_overflow_cq 00:07:45.222 00:07:45.222 07:39:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:45.222 07:39:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:07:45.481 [2024-11-29 07:39:35.209867] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63051) is not found. Dropping the request. 00:07:55.445 Executing: test_write_invalid_db 00:07:55.445 Waiting for AER completion... 00:07:55.445 Failure: test_write_invalid_db 00:07:55.445 00:07:55.445 Executing: test_invalid_db_write_overflow_sq 00:07:55.445 Waiting for AER completion... 00:07:55.445 Failure: test_invalid_db_write_overflow_sq 00:07:55.445 00:07:55.445 Executing: test_invalid_db_write_overflow_cq 00:07:55.445 Waiting for AER completion... 00:07:55.445 Failure: test_invalid_db_write_overflow_cq 00:07:55.445 00:07:55.445 07:39:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:55.445 07:39:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:07:55.445 [2024-11-29 07:39:45.259839] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63051) is not found. Dropping the request. 00:08:05.413 Executing: test_write_invalid_db 00:08:05.413 Waiting for AER completion... 00:08:05.413 Failure: test_write_invalid_db 00:08:05.413 00:08:05.413 Executing: test_invalid_db_write_overflow_sq 00:08:05.413 Waiting for AER completion... 00:08:05.413 Failure: test_invalid_db_write_overflow_sq 00:08:05.413 00:08:05.413 Executing: test_invalid_db_write_overflow_cq 00:08:05.413 Waiting for AER completion... 
00:08:05.413 Failure: test_invalid_db_write_overflow_cq 00:08:05.413 00:08:05.413 07:39:55 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:05.413 07:39:55 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:05.413 [2024-11-29 07:39:55.265305] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63051) is not found. Dropping the request. 00:08:15.381 Executing: test_write_invalid_db 00:08:15.381 Waiting for AER completion... 00:08:15.381 Failure: test_write_invalid_db 00:08:15.381 00:08:15.381 Executing: test_invalid_db_write_overflow_sq 00:08:15.381 Waiting for AER completion... 00:08:15.381 Failure: test_invalid_db_write_overflow_sq 00:08:15.381 00:08:15.381 Executing: test_invalid_db_write_overflow_cq 00:08:15.381 Waiting for AER completion... 00:08:15.381 Failure: test_invalid_db_write_overflow_cq 00:08:15.381 00:08:15.381 ************************************ 00:08:15.381 END TEST nvme_doorbell_aers 00:08:15.381 ************************************ 00:08:15.381 00:08:15.381 real 0m40.185s 00:08:15.381 user 0m34.202s 00:08:15.381 sys 0m5.645s 00:08:15.381 07:40:05 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:15.381 07:40:05 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:15.381 07:40:05 nvme -- nvme/nvme.sh@97 -- # uname 00:08:15.381 07:40:05 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:15.381 07:40:05 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:15.381 07:40:05 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:15.381 07:40:05 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:15.381 07:40:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:15.381 ************************************ 00:08:15.381 START TEST nvme_multi_aen 00:08:15.381 ************************************ 00:08:15.381 07:40:05 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:15.638 [2024-11-29 07:40:05.325726] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63051) is not found. Dropping the request. 00:08:15.638 [2024-11-29 07:40:05.325778] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63051) is not found. Dropping the request. 00:08:15.638 [2024-11-29 07:40:05.325787] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63051) is not found. Dropping the request. 00:08:15.638 [2024-11-29 07:40:05.327161] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63051) is not found. Dropping the request. 00:08:15.638 [2024-11-29 07:40:05.327197] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63051) is not found. Dropping the request. 00:08:15.638 [2024-11-29 07:40:05.327209] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63051) is not found. Dropping the request. 00:08:15.638 [2024-11-29 07:40:05.328218] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63051) is not found. 
Dropping the request. 00:08:15.638 [2024-11-29 07:40:05.328247] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63051) is not found. Dropping the request. 00:08:15.638 [2024-11-29 07:40:05.328254] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63051) is not found. Dropping the request. 00:08:15.638 [2024-11-29 07:40:05.329233] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63051) is not found. Dropping the request. 00:08:15.638 [2024-11-29 07:40:05.329259] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63051) is not found. Dropping the request. 00:08:15.638 [2024-11-29 07:40:05.329267] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63051) is not found. Dropping the request. 00:08:15.638 Child process pid: 63577 00:08:15.638 [Child] Asynchronous Event Request test 00:08:15.638 [Child] Attached to 0000:00:11.0 00:08:15.638 [Child] Attached to 0000:00:13.0 00:08:15.638 [Child] Attached to 0000:00:10.0 00:08:15.638 [Child] Attached to 0000:00:12.0 00:08:15.638 [Child] Registering asynchronous event callbacks... 00:08:15.638 [Child] Getting orig temperature thresholds of all controllers 00:08:15.638 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:15.638 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:15.638 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:15.638 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:15.638 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:15.638 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:15.638 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:15.638 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:15.638 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:15.638 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:15.638 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:15.638 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:15.638 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:15.638 [Child] Cleaning up... 00:08:15.638 Asynchronous Event Request test 00:08:15.638 Attached to 0000:00:11.0 00:08:15.638 Attached to 0000:00:13.0 00:08:15.638 Attached to 0000:00:10.0 00:08:15.638 Attached to 0000:00:12.0 00:08:15.638 Reset controller to setup AER completions for this process 00:08:15.638 Registering asynchronous event callbacks... 
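Unlike the earlier single-AEN pass, this run uses aer -m: the test forks a child first (the "[Child]"-prefixed block above, pid 63577), lets it run the full threshold sequence against all four controllers, and only then repeats the sequence in the parent, which is what follows. An annotated restatement of the invocation; the flag readings are inferred from this output rather than from the tool's help text, so treat them as assumptions:

  aer=/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer
  "$aer" -m -T -i 0
  # -m : multi-process mode; fork the child that produced the [Child] lines
  # -T : exercise the temperature-threshold AER path
  # -i : shared-memory instance id (0), shared with the other tests in this run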
00:08:15.638 Getting orig temperature thresholds of all controllers 00:08:15.638 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:15.638 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:15.638 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:15.638 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:15.638 Setting all controllers temperature threshold low to trigger AER 00:08:15.638 Waiting for all controllers temperature threshold to be set lower 00:08:15.638 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:15.638 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:15.638 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:15.638 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:15.638 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:15.638 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:15.638 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:15.638 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:15.638 Waiting for all controllers to trigger AER and reset threshold 00:08:15.638 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:15.638 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:15.638 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:15.638 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:15.638 Cleaning up... 00:08:15.638 ************************************ 00:08:15.638 END TEST nvme_multi_aen 00:08:15.638 ************************************ 00:08:15.638 00:08:15.638 real 0m0.442s 00:08:15.638 user 0m0.149s 00:08:15.638 sys 0m0.188s 00:08:15.638 07:40:05 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:15.638 07:40:05 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:15.896 07:40:05 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:15.896 07:40:05 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:15.896 07:40:05 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:15.896 07:40:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:15.896 ************************************ 00:08:15.896 START TEST nvme_startup 00:08:15.896 ************************************ 00:08:15.896 07:40:05 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:15.896 Initializing NVMe Controllers 00:08:15.896 Attached to 0000:00:11.0 00:08:15.896 Attached to 0000:00:13.0 00:08:15.896 Attached to 0000:00:10.0 00:08:15.896 Attached to 0000:00:12.0 00:08:15.896 Initialization complete. 00:08:15.896 Time used:155406.094 (us). 
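Every test in this log runs through the run_test helper from autotest_common.sh, which prints the START/END banners and the real/user/sys triple that follows each test. A rough sketch of the observable behavior only, not the helper's actual implementation (the real one also manages xtrace and argument checks):

  run_test_sketch() {
      local name=$1; shift
      echo "START TEST $name"
      time "$@"              # emits real/user/sys lines like the ones below
      echo "END TEST $name"
  }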
00:08:15.896 00:08:15.896 real 0m0.217s 00:08:15.896 user 0m0.061s 00:08:15.896 sys 0m0.104s 00:08:15.896 07:40:05 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:15.896 07:40:05 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:15.896 ************************************ 00:08:15.896 END TEST nvme_startup 00:08:15.896 ************************************ 00:08:16.153 07:40:05 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:16.153 07:40:05 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:16.153 07:40:05 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:16.153 07:40:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:16.153 ************************************ 00:08:16.153 START TEST nvme_multi_secondary 00:08:16.153 ************************************ 00:08:16.153 07:40:05 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:08:16.153 07:40:05 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=63627 00:08:16.153 07:40:05 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=63628 00:08:16.153 07:40:05 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:16.153 07:40:05 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:16.153 07:40:05 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:19.436 Initializing NVMe Controllers 00:08:19.436 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:19.436 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:19.436 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:19.436 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:19.436 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:19.436 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:19.436 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:19.436 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:19.436 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:19.436 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:19.436 Initialization complete. Launching workers. 
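nvme_multi_secondary starts three spdk_nvme_perf instances against the same four controllers: one launched first on core mask 0x1 for 5 seconds (acting as the primary) and two more on 0x2 and 0x4 for 3 seconds, all sharing the target through the -i 0 shared-memory instance id; the waits at nvme.sh@56-57 then join them. Condensed from the trace above (the script's exact ordering is interleaved there because the processes run in the background):

  perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
  "$perf" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 & pid0=$!   # primary, core 0
  "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &           # secondary, core 2
  "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & pid1=$!   # secondary, core 1
  wait "$pid0"
  wait "$pid1"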
00:08:19.436 ======================================================== 00:08:19.436 Latency(us) 00:08:19.436 Device Information : IOPS MiB/s Average min max 00:08:19.436 PCIE (0000:00:11.0) NSID 1 from core 1: 8145.17 31.82 1963.94 731.30 5498.17 00:08:19.436 PCIE (0000:00:13.0) NSID 1 from core 1: 8145.17 31.82 1963.99 728.51 5751.27 00:08:19.436 PCIE (0000:00:10.0) NSID 1 from core 1: 8145.17 31.82 1963.01 709.65 5487.55 00:08:19.436 PCIE (0000:00:12.0) NSID 1 from core 1: 8145.17 31.82 1963.91 727.75 5403.20 00:08:19.436 PCIE (0000:00:12.0) NSID 2 from core 1: 8145.17 31.82 1963.88 726.04 5493.15 00:08:19.436 PCIE (0000:00:12.0) NSID 3 from core 1: 8145.17 31.82 1963.86 714.19 5651.03 00:08:19.436 ======================================================== 00:08:19.436 Total : 48871.01 190.90 1963.77 709.65 5751.27 00:08:19.436 00:08:19.436 Initializing NVMe Controllers 00:08:19.436 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:19.436 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:19.436 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:19.436 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:19.436 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:19.436 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:19.436 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:19.436 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:19.436 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:19.436 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:19.436 Initialization complete. Launching workers. 00:08:19.436 ======================================================== 00:08:19.436 Latency(us) 00:08:19.436 Device Information : IOPS MiB/s Average min max 00:08:19.436 PCIE (0000:00:11.0) NSID 1 from core 2: 3401.22 13.29 4703.81 991.68 12830.13 00:08:19.436 PCIE (0000:00:13.0) NSID 1 from core 2: 3401.22 13.29 4703.84 1004.22 12818.09 00:08:19.436 PCIE (0000:00:10.0) NSID 1 from core 2: 3401.22 13.29 4702.49 1086.57 12499.79 00:08:19.436 PCIE (0000:00:12.0) NSID 1 from core 2: 3401.22 13.29 4702.96 1030.47 12108.23 00:08:19.436 PCIE (0000:00:12.0) NSID 2 from core 2: 3401.22 13.29 4703.27 971.45 12898.98 00:08:19.436 PCIE (0000:00:12.0) NSID 3 from core 2: 3401.22 13.29 4703.81 990.35 11954.55 00:08:19.436 ======================================================== 00:08:19.436 Total : 20407.35 79.72 4703.36 971.45 12898.98 00:08:19.436 00:08:19.436 07:40:09 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 63627 00:08:21.334 Initializing NVMe Controllers 00:08:21.334 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:21.334 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:21.334 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:21.334 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:21.334 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:21.334 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:21.334 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:21.334 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:21.334 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:21.334 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:21.334 Initialization complete. Launching workers. 
00:08:21.334 ======================================================== 00:08:21.334 Latency(us) 00:08:21.334 Device Information : IOPS MiB/s Average min max 00:08:21.334 PCIE (0000:00:11.0) NSID 1 from core 0: 11369.53 44.41 1406.89 690.44 5655.74 00:08:21.334 PCIE (0000:00:13.0) NSID 1 from core 0: 11369.53 44.41 1406.88 686.85 5863.97 00:08:21.334 PCIE (0000:00:10.0) NSID 1 from core 0: 11369.13 44.41 1406.06 666.85 6207.38 00:08:21.334 PCIE (0000:00:12.0) NSID 1 from core 0: 11369.53 44.41 1406.83 679.21 6058.33 00:08:21.334 PCIE (0000:00:12.0) NSID 2 from core 0: 11369.53 44.41 1406.81 590.04 6185.41 00:08:21.334 PCIE (0000:00:12.0) NSID 3 from core 0: 11369.53 44.41 1406.79 566.82 6106.78 00:08:21.334 ======================================================== 00:08:21.334 Total : 68216.80 266.47 1406.71 566.82 6207.38 00:08:21.334 00:08:21.334 07:40:11 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 63628 00:08:21.334 07:40:11 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=63697 00:08:21.335 07:40:11 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:21.335 07:40:11 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=63698 00:08:21.335 07:40:11 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:21.335 07:40:11 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:24.614 Initializing NVMe Controllers 00:08:24.614 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:24.614 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:24.614 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:24.614 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:24.614 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:24.614 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:24.614 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:24.614 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:24.614 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:24.614 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:24.614 Initialization complete. Launching workers. 
00:08:24.614 ======================================================== 00:08:24.614 Latency(us) 00:08:24.614 Device Information : IOPS MiB/s Average min max 00:08:24.614 PCIE (0000:00:11.0) NSID 1 from core 0: 7826.97 30.57 2043.80 717.60 6116.65 00:08:24.614 PCIE (0000:00:13.0) NSID 1 from core 0: 7826.97 30.57 2043.88 720.21 6878.61 00:08:24.614 PCIE (0000:00:10.0) NSID 1 from core 0: 7826.97 30.57 2042.89 691.19 6435.15 00:08:24.614 PCIE (0000:00:12.0) NSID 1 from core 0: 7826.97 30.57 2043.81 713.00 6408.59 00:08:24.614 PCIE (0000:00:12.0) NSID 2 from core 0: 7826.97 30.57 2043.77 723.01 6518.82 00:08:24.614 PCIE (0000:00:12.0) NSID 3 from core 0: 7826.97 30.57 2043.73 724.97 6046.34 00:08:24.614 ======================================================== 00:08:24.614 Total : 46961.80 183.44 2043.65 691.19 6878.61 00:08:24.614 00:08:24.614 Initializing NVMe Controllers 00:08:24.614 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:24.614 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:24.614 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:24.614 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:24.614 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:24.614 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:24.614 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:24.614 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:24.614 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:24.614 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:24.614 Initialization complete. Launching workers. 00:08:24.614 ======================================================== 00:08:24.614 Latency(us) 00:08:24.614 Device Information : IOPS MiB/s Average min max 00:08:24.614 PCIE (0000:00:11.0) NSID 1 from core 1: 7848.58 30.66 2038.16 749.42 6220.88 00:08:24.614 PCIE (0000:00:13.0) NSID 1 from core 1: 7848.58 30.66 2038.19 746.70 5813.21 00:08:24.614 PCIE (0000:00:10.0) NSID 1 from core 1: 7848.58 30.66 2037.22 720.51 5655.49 00:08:24.614 PCIE (0000:00:12.0) NSID 1 from core 1: 7848.58 30.66 2038.13 745.18 6161.94 00:08:24.614 PCIE (0000:00:12.0) NSID 2 from core 1: 7848.58 30.66 2038.11 738.25 6691.03 00:08:24.614 PCIE (0000:00:12.0) NSID 3 from core 1: 7848.58 30.66 2038.07 743.62 6061.83 00:08:24.614 ======================================================== 00:08:24.614 Total : 47091.46 183.95 2037.98 720.51 6691.03 00:08:24.614 00:08:27.143 Initializing NVMe Controllers 00:08:27.143 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:27.143 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:27.143 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:27.143 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:27.143 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:27.143 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:27.143 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:27.143 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:27.143 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:27.143 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:27.143 Initialization complete. Launching workers. 
00:08:27.143 ======================================================== 00:08:27.143 Latency(us) 00:08:27.143 Device Information : IOPS MiB/s Average min max 00:08:27.143 PCIE (0000:00:11.0) NSID 1 from core 2: 4754.65 18.57 3364.61 764.66 13097.50 00:08:27.143 PCIE (0000:00:13.0) NSID 1 from core 2: 4754.65 18.57 3364.59 751.28 13230.73 00:08:27.143 PCIE (0000:00:10.0) NSID 1 from core 2: 4751.45 18.56 3365.85 741.22 13581.65 00:08:27.143 PCIE (0000:00:12.0) NSID 1 from core 2: 4751.45 18.56 3366.57 715.97 12620.50 00:08:27.143 PCIE (0000:00:12.0) NSID 2 from core 2: 4754.65 18.57 3364.25 749.97 13408.20 00:08:27.143 PCIE (0000:00:12.0) NSID 3 from core 2: 4754.65 18.57 3364.04 749.03 13429.45 00:08:27.143 ======================================================== 00:08:27.143 Total : 28521.47 111.41 3364.98 715.97 13581.65 00:08:27.143 00:08:27.143 ************************************ 00:08:27.143 END TEST nvme_multi_secondary 00:08:27.143 ************************************ 00:08:27.143 07:40:16 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 63697 00:08:27.143 07:40:16 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 63698 00:08:27.143 00:08:27.143 real 0m10.775s 00:08:27.143 user 0m18.414s 00:08:27.143 sys 0m0.618s 00:08:27.143 07:40:16 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:27.143 07:40:16 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:08:27.143 07:40:16 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:08:27.143 07:40:16 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:08:27.143 07:40:16 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/62660 ]] 00:08:27.143 07:40:16 nvme -- common/autotest_common.sh@1094 -- # kill 62660 00:08:27.143 07:40:16 nvme -- common/autotest_common.sh@1095 -- # wait 62660 00:08:27.143 [2024-11-29 07:40:16.674906] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63576) is not found. Dropping the request. 00:08:27.143 [2024-11-29 07:40:16.674992] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63576) is not found. Dropping the request. 00:08:27.143 [2024-11-29 07:40:16.675026] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63576) is not found. Dropping the request. 00:08:27.143 [2024-11-29 07:40:16.675047] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63576) is not found. Dropping the request. 00:08:27.143 [2024-11-29 07:40:16.677863] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63576) is not found. Dropping the request. 00:08:27.143 [2024-11-29 07:40:16.677923] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63576) is not found. Dropping the request. 00:08:27.143 [2024-11-29 07:40:16.677943] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63576) is not found. Dropping the request. 00:08:27.143 [2024-11-29 07:40:16.677965] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63576) is not found. Dropping the request. 00:08:27.143 [2024-11-29 07:40:16.680672] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63576) is not found. Dropping the request. 
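The "owning process (pid 63576) is not found. Dropping the request." errors here and below are expected teardown noise, not failures: kill_stub stops the long-lived stub process (pid 62660) that held the controllers, and while the devices are torn down the driver discards admin requests still registered by an earlier test process (pid 63576) that has already exited. The teardown steps, condensed from the kill_stub trace above:

  stubpid=62660                     # pid recorded when the stub was started
  if [[ -e /proc/$stubpid ]]; then
      kill "$stubpid"
      wait "$stubpid"
  fi
  rm -f /var/run/spdk_stub0         # stub marker file, removed as in the trace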
00:08:27.143 [2024-11-29 07:40:16.680731] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63576) is not found. Dropping the request. 00:08:27.143 [2024-11-29 07:40:16.680750] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63576) is not found. Dropping the request. 00:08:27.143 [2024-11-29 07:40:16.680771] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63576) is not found. Dropping the request. 00:08:27.143 [2024-11-29 07:40:16.683561] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63576) is not found. Dropping the request. 00:08:27.143 [2024-11-29 07:40:16.683624] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63576) is not found. Dropping the request. 00:08:27.143 [2024-11-29 07:40:16.683643] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63576) is not found. Dropping the request. 00:08:27.143 [2024-11-29 07:40:16.683663] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63576) is not found. Dropping the request. 00:08:27.143 07:40:16 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:08:27.143 07:40:16 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:08:27.143 07:40:16 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:27.143 07:40:16 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:27.143 07:40:16 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:27.143 07:40:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:27.143 ************************************ 00:08:27.143 START TEST bdev_nvme_reset_stuck_adm_cmd 00:08:27.143 ************************************ 00:08:27.143 07:40:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:27.143 * Looking for test storage... 
00:08:27.143 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:27.143 07:40:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:27.143 07:40:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:27.143 07:40:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:08:27.143 07:40:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:27.143 07:40:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:27.143 07:40:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:27.143 07:40:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:27.143 07:40:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:08:27.144 07:40:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:08:27.144 07:40:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:08:27.144 07:40:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:08:27.144 07:40:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:08:27.144 07:40:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:08:27.144 07:40:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:08:27.144 07:40:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:27.144 07:40:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:08:27.144 07:40:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:08:27.144 07:40:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:27.144 07:40:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:27.144 07:40:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:08:27.144 07:40:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:08:27.144 07:40:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:27.144 07:40:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:08:27.144 07:40:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:27.144 07:40:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:08:27.144 07:40:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:08:27.144 07:40:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:27.144 07:40:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:08:27.144 07:40:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:27.144 07:40:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:27.144 07:40:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:27.144 07:40:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:08:27.144 07:40:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:27.144 07:40:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:27.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.144 --rc genhtml_branch_coverage=1 00:08:27.144 --rc genhtml_function_coverage=1 00:08:27.144 --rc genhtml_legend=1 00:08:27.144 --rc geninfo_all_blocks=1 00:08:27.144 --rc geninfo_unexecuted_blocks=1 00:08:27.144 00:08:27.144 ' 00:08:27.144 07:40:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:27.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.144 --rc genhtml_branch_coverage=1 00:08:27.144 --rc genhtml_function_coverage=1 00:08:27.144 --rc genhtml_legend=1 00:08:27.144 --rc geninfo_all_blocks=1 00:08:27.144 --rc geninfo_unexecuted_blocks=1 00:08:27.144 00:08:27.144 ' 00:08:27.144 07:40:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:27.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.144 --rc genhtml_branch_coverage=1 00:08:27.144 --rc genhtml_function_coverage=1 00:08:27.144 --rc genhtml_legend=1 00:08:27.144 --rc geninfo_all_blocks=1 00:08:27.144 --rc geninfo_unexecuted_blocks=1 00:08:27.144 00:08:27.144 ' 00:08:27.144 07:40:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:27.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.144 --rc genhtml_branch_coverage=1 00:08:27.144 --rc genhtml_function_coverage=1 00:08:27.144 --rc genhtml_legend=1 00:08:27.144 --rc geninfo_all_blocks=1 00:08:27.144 --rc geninfo_unexecuted_blocks=1 00:08:27.144 00:08:27.144 ' 00:08:27.144 07:40:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:08:27.144 07:40:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:08:27.144 07:40:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:08:27.144 
07:40:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:08:27.144 07:40:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:08:27.144 07:40:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:08:27.144 07:40:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:08:27.144 07:40:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:08:27.144 07:40:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:08:27.144 07:40:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:08:27.144 07:40:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:27.144 07:40:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:08:27.144 07:40:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:27.144 07:40:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:27.144 07:40:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:27.144 07:40:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:27.144 07:40:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:27.144 07:40:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:08:27.144 07:40:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:08:27.144 07:40:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:08:27.144 07:40:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=63861 00:08:27.144 07:40:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:08:27.144 07:40:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:27.144 07:40:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 63861 00:08:27.144 07:40:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 63861 ']' 00:08:27.144 07:40:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.144 07:40:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:27.144 07:40:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
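get_first_nvme_bdf, traced above, boils down to three steps: generate the NVMe bdev config, extract every PCIe transport address, and take the first. A sketch using the commands from the trace:

# Sketch of get_first_nvme_bdf as traced: gen_nvme.sh emits a JSON bdev
# config; .config[].params.traddr yields the PCIe address of each attached
# controller (four in this run: 0000:00:10.0 through 0000:00:13.0).
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
(( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
bdf=${bdfs[0]}    # the reset test drives a single controller
echo "$bdf"       # -> 0000:00:10.0 here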
00:08:27.144 07:40:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:27.144 07:40:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:27.403 [2024-11-29 07:40:17.097721] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:08:27.403 [2024-11-29 07:40:17.097854] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63861 ] 00:08:27.403 [2024-11-29 07:40:17.277075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:27.660 [2024-11-29 07:40:17.376598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:27.660 [2024-11-29 07:40:17.376715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:27.660 [2024-11-29 07:40:17.376964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:27.660 [2024-11-29 07:40:17.377101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.227 07:40:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:28.227 07:40:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:08:28.227 07:40:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:08:28.227 07:40:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.227 07:40:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:28.227 nvme0n1 00:08:28.227 07:40:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.227 07:40:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:08:28.227 07:40:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_ZITQD.txt 00:08:28.227 07:40:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:08:28.227 07:40:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.227 07:40:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:28.227 true 00:08:28.227 07:40:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.227 07:40:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:08:28.227 07:40:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732866018 00:08:28.227 07:40:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=63884 00:08:28.227 07:40:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:28.227 07:40:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:08:28.227 07:40:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:08:30.130 07:40:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:08:30.130 07:40:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.130 07:40:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:30.130 [2024-11-29 07:40:20.046865] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:30.130 [2024-11-29 07:40:20.047438] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:08:30.130 [2024-11-29 07:40:20.047545] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:08:30.130 [2024-11-29 07:40:20.047602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.130 [2024-11-29 07:40:20.049304] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:30.130 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 63884 00:08:30.130 07:40:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.130 07:40:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 63884 00:08:30.130 07:40:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 63884 00:08:30.130 07:40:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:08:30.389 07:40:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:08:30.389 07:40:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:08:30.389 07:40:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.389 07:40:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:30.389 07:40:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.389 07:40:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:08:30.389 07:40:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_ZITQD.txt 00:08:30.389 07:40:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:08:30.389 07:40:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:08:30.389 07:40:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:30.389 07:40:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:30.389 07:40:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:30.389 07:40:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:30.389 07:40:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:30.389 07:40:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:30.389 07:40:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:08:30.389 07:40:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:08:30.389 07:40:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:08:30.389 07:40:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:30.389 07:40:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:30.389 07:40:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:30.389 07:40:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:30.389 07:40:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:30.389 07:40:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:30.389 07:40:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:08:30.389 07:40:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:08:30.389 07:40:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_ZITQD.txt 00:08:30.389 07:40:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 63861 00:08:30.389 07:40:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 63861 ']' 00:08:30.389 07:40:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 63861 00:08:30.389 07:40:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:08:30.389 07:40:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:30.389 07:40:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63861 00:08:30.389 killing process with pid 63861 00:08:30.389 07:40:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:30.389 07:40:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:30.389 07:40:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63861' 00:08:30.389 07:40:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 63861 00:08:30.389 07:40:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 63861 00:08:31.765 07:40:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:08:31.765 07:40:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:08:31.765 00:08:31.765 real 0m4.792s 00:08:31.765 user 0m16.931s 00:08:31.765 sys 0m0.487s 00:08:31.765 07:40:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:08:31.765 07:40:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:31.765 ************************************ 00:08:31.765 END TEST bdev_nvme_reset_stuck_adm_cmd 00:08:31.765 ************************************ 00:08:31.765 07:40:21 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:08:31.765 07:40:21 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:08:31.765 07:40:21 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:31.765 07:40:21 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.765 07:40:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:31.765 ************************************ 00:08:31.765 START TEST nvme_fio 00:08:31.765 ************************************ 00:08:31.765 07:40:21 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:08:31.765 07:40:21 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:08:31.765 07:40:21 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:08:31.765 07:40:21 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:08:31.766 07:40:21 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:31.766 07:40:21 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:08:31.766 07:40:21 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:31.766 07:40:21 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:31.766 07:40:21 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:31.766 07:40:21 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:31.766 07:40:21 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:31.766 07:40:21 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:08:31.766 07:40:21 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:08:31.766 07:40:21 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:31.766 07:40:21 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:31.766 07:40:21 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:32.073 07:40:21 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:32.073 07:40:21 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:32.364 07:40:22 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:32.364 07:40:22 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:32.364 07:40:22 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:32.364 07:40:22 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:32.364 07:40:22 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:32.364 07:40:22 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:32.364 07:40:22 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:32.364 07:40:22 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:32.364 07:40:22 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:32.364 07:40:22 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:32.364 07:40:22 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:32.364 07:40:22 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:32.364 07:40:22 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:32.364 07:40:22 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:32.364 07:40:22 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:32.364 07:40:22 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:32.364 07:40:22 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:32.364 07:40:22 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:32.626 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:32.626 fio-3.35 00:08:32.626 Starting 1 thread 00:08:37.913 00:08:37.913 test: (groupid=0, jobs=1): err= 0: pid=64025: Fri Nov 29 07:40:27 2024 00:08:37.913 read: IOPS=21.0k, BW=81.9MiB/s (85.8MB/s)(164MiB/2001msec) 00:08:37.913 slat (nsec): min=3958, max=76720, avg=5874.46, stdev=2338.98 00:08:37.913 clat (usec): min=196, max=9008, avg=3033.17, stdev=937.38 00:08:37.913 lat (usec): min=201, max=9084, avg=3039.04, stdev=938.74 00:08:37.913 clat percentiles (usec): 00:08:37.913 | 1.00th=[ 2073], 5.00th=[ 2343], 10.00th=[ 2442], 20.00th=[ 2540], 00:08:37.913 | 30.00th=[ 2606], 40.00th=[ 2671], 50.00th=[ 2737], 60.00th=[ 2835], 00:08:37.913 | 70.00th=[ 2966], 80.00th=[ 3163], 90.00th=[ 3982], 95.00th=[ 5276], 00:08:37.913 | 99.00th=[ 6915], 99.50th=[ 7111], 99.90th=[ 7832], 99.95th=[ 8356], 00:08:37.913 | 99.99th=[ 8848] 00:08:37.913 bw ( KiB/s): min=82483, max=84320, per=99.52%, avg=83433.00, stdev=920.12, samples=3 00:08:37.914 iops : min=20620, max=21080, avg=20858.00, stdev=230.42, samples=3 00:08:37.914 write: IOPS=20.8k, BW=81.4MiB/s (85.4MB/s)(163MiB/2001msec); 0 zone resets 00:08:37.914 slat (nsec): min=4177, max=77701, avg=6217.45, stdev=2405.97 00:08:37.914 clat (usec): min=231, max=8886, avg=3063.35, stdev=955.73 00:08:37.914 lat (usec): min=237, max=8901, avg=3069.57, stdev=957.09 00:08:37.914 clat percentiles (usec): 00:08:37.914 | 1.00th=[ 2114], 5.00th=[ 2376], 10.00th=[ 2474], 20.00th=[ 2540], 00:08:37.914 | 30.00th=[ 2606], 40.00th=[ 2671], 50.00th=[ 2769], 60.00th=[ 2868], 00:08:37.914 | 70.00th=[ 2999], 80.00th=[ 3195], 90.00th=[ 4047], 95.00th=[ 5407], 00:08:37.914 | 99.00th=[ 6980], 99.50th=[ 7111], 99.90th=[ 8094], 99.95th=[ 8291], 00:08:37.914 | 99.99th=[ 8717] 00:08:37.914 bw ( KiB/s): min=82459, max=84408, per=100.00%, avg=83510.33, stdev=983.54, samples=3 00:08:37.914 iops : min=20614, max=21104, avg=20878.00, stdev=247.20, samples=3 00:08:37.914 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.02% 00:08:37.914 lat (msec) : 2=0.67%, 4=89.11%, 10=10.16% 00:08:37.914 cpu : usr=99.10%, sys=0.15%, ctx=4, majf=0, 
minf=606 00:08:37.914 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:37.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:37.914 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:37.914 issued rwts: total=41938,41717,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:37.914 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:37.914 00:08:37.914 Run status group 0 (all jobs): 00:08:37.914 READ: bw=81.9MiB/s (85.8MB/s), 81.9MiB/s-81.9MiB/s (85.8MB/s-85.8MB/s), io=164MiB (172MB), run=2001-2001msec 00:08:37.914 WRITE: bw=81.4MiB/s (85.4MB/s), 81.4MiB/s-81.4MiB/s (85.4MB/s-85.4MB/s), io=163MiB (171MB), run=2001-2001msec 00:08:37.914 ----------------------------------------------------- 00:08:37.914 Suppressions used: 00:08:37.914 count bytes template 00:08:37.914 1 32 /usr/src/fio/parse.c 00:08:37.914 1 8 libtcmalloc_minimal.so 00:08:37.914 ----------------------------------------------------- 00:08:37.914 00:08:37.914 07:40:27 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:37.914 07:40:27 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:37.914 07:40:27 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:37.914 07:40:27 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:38.174 07:40:28 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:38.174 07:40:28 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:38.432 07:40:28 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:38.432 07:40:28 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:38.432 07:40:28 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:38.432 07:40:28 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:38.432 07:40:28 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:38.432 07:40:28 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:38.432 07:40:28 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:38.432 07:40:28 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:38.432 07:40:28 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:38.433 07:40:28 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:38.433 07:40:28 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:38.433 07:40:28 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:38.433 07:40:28 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:38.433 07:40:28 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:38.433 07:40:28 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:38.433 07:40:28 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:38.433 07:40:28 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:38.433 07:40:28 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:38.693 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:38.693 fio-3.35 00:08:38.693 Starting 1 thread 00:08:43.981 00:08:43.981 test: (groupid=0, jobs=1): err= 0: pid=64080: Fri Nov 29 07:40:33 2024 00:08:43.981 read: IOPS=20.9k, BW=81.8MiB/s (85.7MB/s)(164MiB/2001msec) 00:08:43.981 slat (nsec): min=3359, max=72245, avg=5152.71, stdev=2451.30 00:08:43.981 clat (usec): min=315, max=9336, avg=3056.16, stdev=1089.44 00:08:43.981 lat (usec): min=320, max=9364, avg=3061.31, stdev=1090.56 00:08:43.981 clat percentiles (usec): 00:08:43.981 | 1.00th=[ 1680], 5.00th=[ 2114], 10.00th=[ 2212], 20.00th=[ 2343], 00:08:43.981 | 30.00th=[ 2442], 40.00th=[ 2507], 50.00th=[ 2638], 60.00th=[ 2802], 00:08:43.981 | 70.00th=[ 3097], 80.00th=[ 3654], 90.00th=[ 4752], 95.00th=[ 5473], 00:08:43.981 | 99.00th=[ 6718], 99.50th=[ 7308], 99.90th=[ 8717], 99.95th=[ 8848], 00:08:43.981 | 99.99th=[ 9241] 00:08:43.981 bw ( KiB/s): min=81728, max=89836, per=100.00%, avg=86252.00, stdev=4134.93, samples=3 00:08:43.981 iops : min=20432, max=22459, avg=21563.00, stdev=1033.73, samples=3 00:08:43.981 write: IOPS=20.8k, BW=81.4MiB/s (85.3MB/s)(163MiB/2001msec); 0 zone resets 00:08:43.981 slat (nsec): min=3382, max=82230, avg=5359.39, stdev=2581.20 00:08:43.981 clat (usec): min=424, max=9282, avg=3054.01, stdev=1076.63 00:08:43.981 lat (usec): min=429, max=9289, avg=3059.37, stdev=1077.78 00:08:43.981 clat percentiles (usec): 00:08:43.981 | 1.00th=[ 1663], 5.00th=[ 2114], 10.00th=[ 2212], 20.00th=[ 2343], 00:08:43.981 | 30.00th=[ 2442], 40.00th=[ 2540], 50.00th=[ 2671], 60.00th=[ 2802], 00:08:43.981 | 70.00th=[ 3097], 80.00th=[ 3621], 90.00th=[ 4752], 95.00th=[ 5407], 00:08:43.981 | 99.00th=[ 6652], 99.50th=[ 7308], 99.90th=[ 8717], 99.95th=[ 8848], 00:08:43.981 | 99.99th=[ 9110] 00:08:43.981 bw ( KiB/s): min=81440, max=89820, per=100.00%, avg=86369.33, stdev=4381.32, samples=3 00:08:43.981 iops : min=20360, max=22455, avg=21592.33, stdev=1095.33, samples=3 00:08:43.981 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.04% 00:08:43.981 lat (msec) : 2=2.23%, 4=81.52%, 10=16.17% 00:08:43.981 cpu : usr=99.05%, sys=0.05%, ctx=14, majf=0, minf=606 00:08:43.981 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:43.981 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:43.981 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:43.981 issued rwts: total=41881,41676,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:43.981 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:43.981 00:08:43.981 Run status group 0 (all jobs): 00:08:43.981 READ: bw=81.8MiB/s (85.7MB/s), 81.8MiB/s-81.8MiB/s (85.7MB/s-85.7MB/s), io=164MiB (172MB), run=2001-2001msec 00:08:43.981 WRITE: bw=81.4MiB/s (85.3MB/s), 81.4MiB/s-81.4MiB/s (85.3MB/s-85.3MB/s), io=163MiB (171MB), run=2001-2001msec 00:08:44.242 ----------------------------------------------------- 00:08:44.242 Suppressions used: 00:08:44.242 count bytes template 00:08:44.242 1 32 /usr/src/fio/parse.c 00:08:44.242 1 8 libtcmalloc_minimal.so 00:08:44.242 ----------------------------------------------------- 00:08:44.242 00:08:44.242 
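Both fio runs above (0000:00:10.0 and 0000:00:11.0) go through the same fio_plugin helper: find the sanitizer runtime the SPDK ioengine links against, preload it ahead of the plugin, then launch fio. A condensed sketch of that sequence using the paths from this run (the traced helper also checks for libclang_rt.asan):

# Condensed sketch of fio_plugin as traced. The SPDK fio ioengine is a
# shared object; with ASAN builds, libasan must load before the plugin,
# so the helper pulls the library path out of the plugin's ldd output.
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # /usr/lib64/libasan.so.8 here
LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" /usr/src/fio/fio \
    /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096

Note the traddr in the filename uses dots rather than colons; fio would otherwise split the filename string on ':'.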
07:40:34 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:44.242 07:40:34 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:44.242 07:40:34 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:44.242 07:40:34 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:44.503 07:40:34 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:44.503 07:40:34 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:44.763 07:40:34 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:44.763 07:40:34 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:44.763 07:40:34 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:44.763 07:40:34 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:44.763 07:40:34 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:44.763 07:40:34 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:44.763 07:40:34 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:44.763 07:40:34 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:44.763 07:40:34 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:44.763 07:40:34 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:44.763 07:40:34 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:44.763 07:40:34 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:44.763 07:40:34 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:44.763 07:40:34 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:44.763 07:40:34 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:44.763 07:40:34 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:44.763 07:40:34 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:44.763 07:40:34 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:45.024 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:45.024 fio-3.35 00:08:45.024 Starting 1 thread 00:08:50.316 00:08:50.316 test: (groupid=0, jobs=1): err= 0: pid=64147: Fri Nov 29 07:40:40 2024 00:08:50.316 read: IOPS=16.3k, BW=63.6MiB/s (66.7MB/s)(127MiB/2001msec) 00:08:50.316 slat (nsec): min=4227, max=82220, avg=6018.70, stdev=3395.18 00:08:50.316 clat (usec): min=594, max=10457, avg=3893.77, stdev=1455.08 00:08:50.316 lat (usec): min=616, max=10463, avg=3899.79, stdev=1456.35 00:08:50.316 clat percentiles (usec): 00:08:50.316 | 1.00th=[ 2114], 5.00th=[ 2343], 10.00th=[ 2474], 20.00th=[ 2671], 00:08:50.316 | 30.00th=[ 2835], 
40.00th=[ 3064], 50.00th=[ 3359], 60.00th=[ 3785], 00:08:50.317 | 70.00th=[ 4490], 80.00th=[ 5276], 90.00th=[ 6128], 95.00th=[ 6718], 00:08:50.317 | 99.00th=[ 7767], 99.50th=[ 8291], 99.90th=[ 9110], 99.95th=[ 9241], 00:08:50.317 | 99.99th=[ 9896] 00:08:50.317 bw ( KiB/s): min=65232, max=70913, per=100.00%, avg=68955.00, stdev=3225.66, samples=3 00:08:50.317 iops : min=16308, max=17728, avg=17238.67, stdev=806.34, samples=3 00:08:50.317 write: IOPS=16.3k, BW=63.7MiB/s (66.8MB/s)(127MiB/2001msec); 0 zone resets 00:08:50.317 slat (nsec): min=4266, max=65022, avg=6169.74, stdev=3313.47 00:08:50.317 clat (usec): min=880, max=10521, avg=3932.22, stdev=1460.00 00:08:50.317 lat (usec): min=886, max=10537, avg=3938.39, stdev=1461.21 00:08:50.317 clat percentiles (usec): 00:08:50.317 | 1.00th=[ 2147], 5.00th=[ 2376], 10.00th=[ 2507], 20.00th=[ 2704], 00:08:50.317 | 30.00th=[ 2868], 40.00th=[ 3097], 50.00th=[ 3425], 60.00th=[ 3818], 00:08:50.317 | 70.00th=[ 4555], 80.00th=[ 5276], 90.00th=[ 6194], 95.00th=[ 6783], 00:08:50.317 | 99.00th=[ 7898], 99.50th=[ 8455], 99.90th=[ 9241], 99.95th=[ 9372], 00:08:50.317 | 99.99th=[10028] 00:08:50.317 bw ( KiB/s): min=64752, max=71110, per=100.00%, avg=68858.00, stdev=3561.46, samples=3 00:08:50.317 iops : min=16188, max=17777, avg=17214.33, stdev=890.21, samples=3 00:08:50.317 lat (usec) : 750=0.01%, 1000=0.01% 00:08:50.317 lat (msec) : 2=0.51%, 4=62.55%, 10=36.93%, 20=0.01% 00:08:50.317 cpu : usr=98.75%, sys=0.05%, ctx=3, majf=0, minf=606 00:08:50.317 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:50.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.317 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:50.317 issued rwts: total=32563,32638,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:50.317 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:50.317 00:08:50.317 Run status group 0 (all jobs): 00:08:50.317 READ: bw=63.6MiB/s (66.7MB/s), 63.6MiB/s-63.6MiB/s (66.7MB/s-66.7MB/s), io=127MiB (133MB), run=2001-2001msec 00:08:50.317 WRITE: bw=63.7MiB/s (66.8MB/s), 63.7MiB/s-63.7MiB/s (66.8MB/s-66.8MB/s), io=127MiB (134MB), run=2001-2001msec 00:08:50.579 ----------------------------------------------------- 00:08:50.579 Suppressions used: 00:08:50.579 count bytes template 00:08:50.579 1 32 /usr/src/fio/parse.c 00:08:50.579 1 8 libtcmalloc_minimal.so 00:08:50.579 ----------------------------------------------------- 00:08:50.579 00:08:50.579 07:40:40 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:50.579 07:40:40 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:50.579 07:40:40 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:50.579 07:40:40 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:50.842 07:40:40 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:50.842 07:40:40 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:51.104 07:40:40 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:51.104 07:40:40 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:51.104 07:40:40 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:51.104 07:40:40 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:51.104 07:40:40 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:51.104 07:40:40 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:51.104 07:40:40 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:51.104 07:40:40 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:51.104 07:40:40 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:51.104 07:40:40 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:51.104 07:40:40 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:51.104 07:40:40 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:51.104 07:40:40 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:51.104 07:40:40 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:51.104 07:40:40 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:51.104 07:40:40 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:51.104 07:40:40 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:51.104 07:40:40 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:51.104 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:51.104 fio-3.35 00:08:51.104 Starting 1 thread 00:08:59.270 00:08:59.270 test: (groupid=0, jobs=1): err= 0: pid=64209: Fri Nov 29 07:40:48 2024 00:08:59.270 read: IOPS=16.2k, BW=63.2MiB/s (66.3MB/s)(127MiB/2001msec) 00:08:59.270 slat (nsec): min=4336, max=89225, avg=6308.45, stdev=3567.06 00:08:59.270 clat (usec): min=645, max=10513, avg=3910.13, stdev=1428.30 00:08:59.270 lat (usec): min=652, max=10539, avg=3916.44, stdev=1429.75 00:08:59.270 clat percentiles (usec): 00:08:59.270 | 1.00th=[ 2212], 5.00th=[ 2409], 10.00th=[ 2573], 20.00th=[ 2769], 00:08:59.270 | 30.00th=[ 2966], 40.00th=[ 3130], 50.00th=[ 3326], 60.00th=[ 3654], 00:08:59.270 | 70.00th=[ 4424], 80.00th=[ 5276], 90.00th=[ 6128], 95.00th=[ 6718], 00:08:59.270 | 99.00th=[ 8029], 99.50th=[ 8586], 99.90th=[ 9241], 99.95th=[ 9634], 00:08:59.270 | 99.99th=[10159] 00:08:59.270 bw ( KiB/s): min=59896, max=72600, per=100.00%, avg=66336.00, stdev=6353.83, samples=3 00:08:59.270 iops : min=14974, max=18150, avg=16584.00, stdev=1588.46, samples=3 00:08:59.270 write: IOPS=16.2k, BW=63.4MiB/s (66.4MB/s)(127MiB/2001msec); 0 zone resets 00:08:59.270 slat (usec): min=4, max=138, avg= 6.55, stdev= 3.76 00:08:59.270 clat (usec): min=554, max=10578, avg=3959.12, stdev=1440.35 00:08:59.270 lat (usec): min=561, max=10604, avg=3965.67, stdev=1441.81 00:08:59.270 clat percentiles (usec): 00:08:59.270 | 1.00th=[ 2212], 5.00th=[ 2442], 10.00th=[ 2606], 20.00th=[ 2802], 00:08:59.270 | 30.00th=[ 2999], 40.00th=[ 3163], 50.00th=[ 3359], 60.00th=[ 3720], 00:08:59.270 | 70.00th=[ 4490], 80.00th=[ 5342], 90.00th=[ 6194], 95.00th=[ 6783], 
00:08:59.270 | 99.00th=[ 8094], 99.50th=[ 8586], 99.90th=[ 9372], 99.95th=[ 9634], 00:08:59.270 | 99.99th=[10159] 00:08:59.270 bw ( KiB/s): min=59736, max=72696, per=100.00%, avg=66242.67, stdev=6480.16, samples=3 00:08:59.270 iops : min=14934, max=18174, avg=16560.67, stdev=1620.04, samples=3 00:08:59.270 lat (usec) : 750=0.02% 00:08:59.270 lat (msec) : 2=0.18%, 4=64.52%, 10=35.27%, 20=0.02% 00:08:59.270 cpu : usr=98.55%, sys=0.20%, ctx=4, majf=0, minf=604 00:08:59.270 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:59.270 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:59.270 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:59.270 issued rwts: total=32389,32453,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:59.270 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:59.270 00:08:59.270 Run status group 0 (all jobs): 00:08:59.270 READ: bw=63.2MiB/s (66.3MB/s), 63.2MiB/s-63.2MiB/s (66.3MB/s-66.3MB/s), io=127MiB (133MB), run=2001-2001msec 00:08:59.270 WRITE: bw=63.4MiB/s (66.4MB/s), 63.4MiB/s-63.4MiB/s (66.4MB/s-66.4MB/s), io=127MiB (133MB), run=2001-2001msec 00:08:59.270 ----------------------------------------------------- 00:08:59.270 Suppressions used: 00:08:59.270 count bytes template 00:08:59.270 1 32 /usr/src/fio/parse.c 00:08:59.270 1 8 libtcmalloc_minimal.so 00:08:59.270 ----------------------------------------------------- 00:08:59.270 00:08:59.270 ************************************ 00:08:59.270 END TEST nvme_fio 00:08:59.270 ************************************ 00:08:59.270 07:40:48 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:59.270 07:40:48 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:08:59.270 00:08:59.270 real 0m26.930s 00:08:59.270 user 0m18.886s 00:08:59.270 sys 0m12.577s 00:08:59.270 07:40:48 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:59.270 07:40:48 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:08:59.270 00:08:59.270 real 1m36.089s 00:08:59.270 user 3m39.925s 00:08:59.270 sys 0m22.887s 00:08:59.270 07:40:48 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:59.270 ************************************ 00:08:59.270 END TEST nvme 00:08:59.270 ************************************ 00:08:59.270 07:40:48 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:59.270 07:40:48 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:08:59.270 07:40:48 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:08:59.270 07:40:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:59.270 07:40:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:59.270 07:40:48 -- common/autotest_common.sh@10 -- # set +x 00:08:59.270 ************************************ 00:08:59.270 START TEST nvme_scc 00:08:59.270 ************************************ 00:08:59.270 07:40:48 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:08:59.270 * Looking for test storage... 
00:08:59.270 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:59.270 07:40:48 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:59.270 07:40:48 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:08:59.270 07:40:48 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:59.270 07:40:48 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:59.270 07:40:48 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:59.270 07:40:48 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:59.270 07:40:48 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:59.270 07:40:48 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:08:59.270 07:40:48 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:08:59.270 07:40:48 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:08:59.270 07:40:48 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:08:59.270 07:40:48 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:08:59.270 07:40:48 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:08:59.270 07:40:48 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:08:59.271 07:40:48 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:59.271 07:40:48 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:08:59.271 07:40:48 nvme_scc -- scripts/common.sh@345 -- # : 1 00:08:59.271 07:40:48 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:59.271 07:40:48 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:59.271 07:40:48 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:08:59.271 07:40:48 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:08:59.271 07:40:48 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:59.271 07:40:48 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:08:59.271 07:40:48 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:59.271 07:40:48 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:08:59.271 07:40:48 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:08:59.271 07:40:48 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:59.271 07:40:48 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:08:59.271 07:40:48 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:59.271 07:40:48 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:59.271 07:40:48 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:59.271 07:40:48 nvme_scc -- scripts/common.sh@368 -- # return 0 00:08:59.271 07:40:48 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:59.271 07:40:48 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:59.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.271 --rc genhtml_branch_coverage=1 00:08:59.271 --rc genhtml_function_coverage=1 00:08:59.271 --rc genhtml_legend=1 00:08:59.271 --rc geninfo_all_blocks=1 00:08:59.271 --rc geninfo_unexecuted_blocks=1 00:08:59.271 00:08:59.271 ' 00:08:59.271 07:40:48 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:59.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.271 --rc genhtml_branch_coverage=1 00:08:59.271 --rc genhtml_function_coverage=1 00:08:59.271 --rc genhtml_legend=1 00:08:59.271 --rc geninfo_all_blocks=1 00:08:59.271 --rc geninfo_unexecuted_blocks=1 00:08:59.271 00:08:59.271 ' 00:08:59.271 07:40:48 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:08:59.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.271 --rc genhtml_branch_coverage=1 00:08:59.271 --rc genhtml_function_coverage=1 00:08:59.271 --rc genhtml_legend=1 00:08:59.271 --rc geninfo_all_blocks=1 00:08:59.271 --rc geninfo_unexecuted_blocks=1 00:08:59.271 00:08:59.271 ' 00:08:59.271 07:40:48 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:59.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.271 --rc genhtml_branch_coverage=1 00:08:59.271 --rc genhtml_function_coverage=1 00:08:59.271 --rc genhtml_legend=1 00:08:59.271 --rc geninfo_all_blocks=1 00:08:59.271 --rc geninfo_unexecuted_blocks=1 00:08:59.271 00:08:59.271 ' 00:08:59.271 07:40:48 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:08:59.271 07:40:48 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:08:59.271 07:40:48 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:08:59.271 07:40:48 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:08:59.271 07:40:48 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:59.271 07:40:48 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:08:59.271 07:40:48 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:59.271 07:40:48 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:59.271 07:40:48 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:59.271 07:40:48 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.271 07:40:48 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.271 07:40:48 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.271 07:40:48 nvme_scc -- paths/export.sh@5 -- # export PATH 00:08:59.271 07:40:48 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:08:59.271 07:40:48 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:08:59.271 07:40:48 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:08:59.271 07:40:48 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:08:59.271 07:40:48 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:08:59.271 07:40:48 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:08:59.271 07:40:48 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:08:59.271 07:40:48 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:08:59.271 07:40:48 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:08:59.271 07:40:48 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:08:59.271 07:40:48 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:59.271 07:40:48 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:08:59.271 07:40:48 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:08:59.271 07:40:48 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:08:59.271 07:40:48 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:59.271 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:59.533 Waiting for block devices as requested 00:08:59.533 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:59.533 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:59.793 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:59.793 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:05.092 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:05.092 07:40:54 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:05.092 07:40:54 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:05.092 07:40:54 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:05.092 07:40:54 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:05.092 07:40:54 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:05.092 07:40:54 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:05.092 07:40:54 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:05.092 07:40:54 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:05.092 07:40:54 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:05.092 07:40:54 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:05.092 07:40:54 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:05.092 07:40:54 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:05.092 07:40:54 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:05.092 07:40:54 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:05.092 07:40:54 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:05.092 07:40:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:05.092 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.092 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.092 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:05.092 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.092 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.092 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:05.092 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:05.092 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:09:05.092 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.092 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.092 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:05.092 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:05.092 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:05.092 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.092 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.092 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:05.092 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:05.092 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:05.092 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.092 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.092 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:05.092 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:05.092 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:05.092 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.092 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.092 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:05.092 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:05.092 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.093 07:40:54 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
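(The repeated @21/@22/@23 records above and below are one identify-parsing loop: nvme-cli's id-ctrl output is split on the first ':' into a register name and a value, empty values are skipped, and everything else is stored into a per-controller associative array, here nvme0. The following is a minimal sketch reconstructed from this trace, not the script itself; the whitespace trimming and the function body layout are assumptions, while the array handling, the eval form, and the nvme-cli path are taken from the @16-@23 records.)

  nvme_get() {                                   # e.g. nvme_get nvme0 id-ctrl /dev/nvme0
      local ref=$1 reg val                       # functions.sh@17 in the trace
      shift                                      # functions.sh@18
      local -gA "$ref=()"                        # functions.sh@20: declare -gA nvme0=()
      while IFS=: read -r reg val; do            # functions.sh@21: split "reg : value"
          reg=${reg//[[:space:]]/}               # (assumed) strip padding around the key
          val=${val# }                           # (assumed) drop the space after ':'
          [[ -n $val ]] || continue              # functions.sh@22: skip empty fields
          eval "${ref}[$reg]=\"\$val\""          # functions.sh@23: nvme0[sn]='12341 ', nvme0[mdts]=7, ...
      done < <(/usr/local/src/nvme-cli/nvme "$@")   # the binary invoked at functions.sh@16
  }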
00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.093 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:05.094 07:40:54 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.094 07:40:54 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:05.094 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.095 07:40:54 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:05.095 07:40:54 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:05.095 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:05.096 
07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
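(The ng0n1 keys being filled in here come from the namespace pass that starts at functions.sh@53-57 a little earlier in the trace: an extglob pattern matches both the generic ng0n1 node and the nvme0n1 block node under the controller's sysfs directory, each one gets its own nvme_get id-ns dump, and the result is indexed by namespace number through a nameref. A sketch inferred from those records follows; only the @-numbered steps are from the trace, the surrounding scaffolding is assumed.)

  shopt -s extglob nullglob                        # the @(...|...) pattern below needs extglob
  declare -A nvme0_ns=()                           # (assumed) declared earlier by the scan
  declare -n _ctrl_ns=nvme0_ns                     # functions.sh@53 uses a local nameref
  ctrl=/sys/class/nvme/nvme0
  for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do   # functions.sh@54: ng0n1 and nvme0n1
      [[ -e $ns ]] || continue                     # functions.sh@55
      ns_dev=${ns##*/}                             # functions.sh@56: ng0n1, then nvme0n1
      nvme_get "$ns_dev" id-ns "/dev/$ns_dev"      # functions.sh@57: fills ng0n1[...] / nvme0n1[...]
      _ctrl_ns[${ns##*n}]=$ns_dev                  # functions.sh@58: keyed by namespace number
  done

(For scale: the in-use format lbaf4 later in this dump has lbads:12, i.e. 4096-byte blocks, and nsze/ncap/nuse are all 0x140000 blocks, so each of these QEMU namespaces is 1,310,720 x 4096 B = 5,368,709,120 B, a 5 GiB namespace.)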
00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:05.096 07:40:54 nvme_scc -- 
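(The mssrl/mcl/msrc values just recorded, 128/128/127, are the namespace's Copy-command limits: maximum single source range length, maximum copy length, and maximum source range count. Whether Copy is available at all is advertised in the controller's ONCS field captured earlier, nvme0[oncs]=0x15d, whose bit 8 (0x100) is set. A hypothetical check in that style, using the arrays this scan populates; it is not a helper from the test itself:)

  oncs=${nvme0[oncs]}                    # 0x15d in the dump above
  if (( (oncs & 0x100) )); then          # ONCS bit 8 = Copy command supported
      echo "nvme0 advertises the simple copy command"
  fi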
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:05.096 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.097 07:40:54 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:05.097 07:40:54 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.097 07:40:54 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:05.097 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:05.098 07:40:54 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:05.098 07:40:54 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:05.098 07:40:54 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:05.098 07:40:54 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:05.098 07:40:54 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:05.098 07:40:54 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:05.098 07:40:54 
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.099 
07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:05.099 
07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.099 07:40:54 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.099 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.100 07:40:54 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:05.100 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:05.101 07:40:54 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.101 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:05.102 07:40:54 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:09:05.102 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:09:05.103 07:40:54 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.103 
00:09:05.103 07:40:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:09:05.103 07:40:54 nvme_scc -- [trace condensed] fields parsed into nvme1n1[]:
00:09:05.103   nsze=0x17a17a  ncap=0x17a17a  nuse=0x17a17a  nsfeat=0x14  nlbaf=7  flbas=0x7
00:09:05.103   mc=0x3  dpc=0x1f  dps=0  nmic=0  rescap=0  fpi=0  dlfeat=1
00:09:05.104   nawun=0  nawupf=0  nacwu=0  nabsn=0  nabo=0  nabspf=0  noiob=0  nvmcap=0
00:09:05.104   npwg=0  npwa=0  npdg=0  npda=0  nows=0  mssrl=128  mcl=128  msrc=127
00:09:05.104   nulbaf=0  anagrpid=0  nsattr=0  nvmsetid=0  endgid=0
00:09:05.104   nguid=00000000000000000000000000000000  eui64=0000000000000000
00:09:05.104   lbaf0='ms:0 lbads:9 rp:0'    lbaf1='ms:8 lbads:9 rp:0'    lbaf2='ms:16 lbads:9 rp:0'
00:09:05.104   lbaf3='ms:64 lbads:9 rp:0'   lbaf4='ms:0 lbads:12 rp:0'   lbaf5='ms:8 lbads:12 rp:0'
00:09:05.105   lbaf6='ms:16 lbads:12 rp:0'  lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
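The ${ns##*n} expansion at @58 is how the namespace index is recovered from the sysfs path: it strips the longest prefix ending in 'n', leaving only the digits after the final 'n'. For example:

    ns=/sys/class/nvme/nvme1/nvme1n1
    echo "${ns##*n}"    # -> 1, so _ctrl_ns[1]=nvme1n1 overwrites the earlier ng1n1 entry
    ns=/sys/class/nvme/nvme2/ng2n1
    echo "${ns##*n}"    # -> 1, namespace 1 again, this time of controller nvme2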
00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:09:05.105 07:40:54 nvme_scc -- scripts/common.sh@18-27 -- # allow/block lists empty -> return 0
00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
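The pci_can_use check at functions.sh@50 gates each controller on allow/block lists before probing; in this run both lists are empty, so scripts/common.sh falls straight through to return 0. A sketch consistent with the @21/@25/@27 flow above, assuming SPDK's PCI_ALLOWED and PCI_BLOCKED environment variables (the exact internals are not shown in this excerpt):

    pci_can_use() {
        local i
        # Explicitly allowed BDFs short-circuit every other check.
        [[ ${PCI_ALLOWED:-} =~ $1 ]] && return 0    # trace @21: [[ '' =~ 0000:00:12.0 ]]
        # No block list configured means any remaining device is usable.
        [[ -z ${PCI_BLOCKED:-} ]] && return 0       # trace @25: [[ -z '' ]] -> @27: return 0
        for i in ${PCI_BLOCKED}; do                 # intentional word splitting
            [[ $i == "$1" ]] && return 1
        done
        return 0
    }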
'nvme2[fr]="8.0.0 "' 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:05.105 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:05.106 07:40:54 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.106 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:05.107 07:40:54 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:05.107 
07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.107 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:05.108 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:05.108 07:40:54 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:09:05.108 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.108 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.108 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:05.108 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:05.108 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.108 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.108 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.108 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:05.108 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:05.108 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.108 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.108 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.108 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:05.108 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:05.108 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.108 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.108 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.108 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:05.108 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:05.108 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.108 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.108 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.108 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:05.108 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:05.108 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.108 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.108 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:05.108 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:05.108 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:05.108 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.108 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.108 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:05.108 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:05.108 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:05.108 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.108 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.108 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:05.108 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:05.108 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:05.108 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.108 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.108 07:40:54 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:05.108 07:40:54 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:05.108 
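The @53 nameref is what lets one loop fill a differently named array per controller: _ctrl_ns aliases nvme2_ns here, and the extglob pattern at @54 matches both the ng2n* character-device names and the nvme2n* block-device names. A runnable sketch using the pattern copied from the trace (the wrapper function name is ours, not from functions.sh):

    shopt -s extglob nullglob                   # extglob must be on before the function is parsed
    collect_ctrl_namespaces() {
        local ctrl=$1 ns
        local -n _ctrl_ns="${ctrl##*/}_ns"      # nameref: writes land in e.g. nvme2_ns
        for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
            _ctrl_ns[${ns##*n}]=${ns##*/}       # nvme2_ns[1]=ng2n1, later overwritten by nvme2n1
        done
    }
    declare -gA nvme2_ns=()
    collect_ctrl_namespaces /sys/class/nvme/nvme2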
00:09:05.108 07:40:54 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]]
00:09:05.108 07:40:54 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1
00:09:05.108 07:40:54 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1
00:09:05.108 07:40:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1
00:09:05.108 07:40:54 nvme_scc -- [trace condensed] fields parsed into ng2n1[]:
00:09:05.108   nsze=0x100000  ncap=0x100000  nuse=0x100000  nsfeat=0x14  nlbaf=7  flbas=0x4
00:09:05.108   mc=0x3  dpc=0x1f  dps=0  nmic=0  rescap=0  fpi=0  dlfeat=1
00:09:05.108   nawun=0  nawupf=0  nacwu=0  nabsn=0  nabo=0  nabspf=0  noiob=0  nvmcap=0
00:09:05.109   npwg=0  npwa=0  npdg=0  npda=0  nows=0  mssrl=128  mcl=128  msrc=127
00:09:05.109   nulbaf=0  anagrpid=0  nsattr=0  nvmsetid=0  endgid=0
00:09:05.109   nguid=00000000000000000000000000000000  eui64=0000000000000000
00:09:05.109   lbaf0='ms:0 lbads:9 rp:0'    lbaf1='ms:8 lbads:9 rp:0'    lbaf2='ms:16 lbads:9 rp:0'
00:09:05.109   lbaf3='ms:64 lbads:9 rp:0'   lbaf4='ms:0 lbads:12 rp:0 (in use)'
00:09:05.109   lbaf5='ms:8 lbads:12 rp:0'   lbaf6='ms:16 lbads:12 rp:0'  lbaf7='ms:64 lbads:12 rp:0'
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:05.109 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:05.109 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.109 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.109 07:40:54 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:09:05.109 07:40:54 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:05.109 07:40:54 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:09:05.109 07:40:54 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:09:05.109 07:40:54 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:09:05.109 07:40:54 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:09:05.109 07:40:54 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:05.109 07:40:54 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:09:05.109 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.109 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.109 07:40:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:09:05.109 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:05.109 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.109 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.109 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:05.109 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:09:05.109 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:09:05.109 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.109 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.109 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:05.109 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:09:05.109 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:09:05.109 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.109 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.109 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:05.109 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:09:05.109 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:09:05.109 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.110 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.110 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:05.110 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:09:05.110 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:09:05.110 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.110 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.110 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:05.110 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:09:05.110 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:09:05.110 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.110 07:40:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.110 07:40:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:05.110 07:40:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:09:05.110 07:40:54 nvme_scc -- 
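The records above are bash xtrace output from the nvme_get helper in SPDK's test/nvme/functions.sh: it runs nvme-cli's id-ns against a namespace node and folds each "field : value" line of the output into a global bash associative array. A minimal sketch of that loop, reconstructed from the functions.sh@16-23 records visible in this trace (the exact whitespace trimming is an assumption; the real helper differs in detail):

    # Sketch of nvme_get as reconstructed from the trace; not the verbatim
    # SPDK implementation.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                 # declare the target array, e.g. ng2n2=()
        while IFS=: read -r reg val; do     # split "nsze : 0x100000" at the first colon
            [[ -n $val ]] || continue       # skip lines without a value (functions.sh@22)
            reg=${reg//[[:space:]]/}        # "lbaf  4" -> "lbaf4" (assumed trimming)
            val=${val# }
            eval "${ref}[\$reg]=\"\$val\""  # e.g. ng2n2[nsze]="0x100000" (functions.sh@23)
        done < <(nvme "$@")                 # e.g. nvme id-ns /dev/ng2n2 (functions.sh@16)
    }

    nvme_get ng2n2 id-ns /dev/ng2n2         # as invoked at functions.sh@57
    echo "nsze=${ng2n2[nsze]} flbas=${ng2n2[flbas]}"

The eval indirection is what produces the long runs of eval 'ng2n1[...]=...' records in this log: one read/test/eval triple per id-ns field per namespace.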
00:09:05.110 07:40:54 nvme_scc -- nvme/functions.sh@21-23 -- [trace condensed] remaining id-ns fields parsed into ng2n2[]: mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000 lbaf0-lbaf7 identical to ng2n1 (lbaf4 'ms:0 lbads:12 rp:0' in use)
00:09:05.111 07:40:54 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
00:09:05.111 07:40:54 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:09:05.111 07:40:54 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
00:09:05.111 07:40:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
00:09:05.111 07:40:55 nvme_scc -- nvme/functions.sh@21-23 -- [trace condensed] first id-ns fields parsed into ng2n3[]: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14
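The functions.sh@54-57 records show how the namespace nodes are discovered in the first place: an extglob alternation under the controller's sysfs directory matches both the character nodes (ng2n1, ng2n2, ...) and the block nodes (nvme2n1, ...), and each match is fed back into nvme_get. A sketch of that enumeration using the same pattern, with the controller path taken from this trace:

    shopt -s extglob nullglob

    ctrl=/sys/class/nvme/nvme2     # controller seen in this trace
    # For this ctrl the pattern expands to @(ng2|nvme2n)*, so it matches
    # both ng2nY and nvme2nY entries, in lexicographic (glob) order.
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue   # functions.sh@55
        ns_dev=${ns##*/}           # ng2n2, nvme2n1, ...
        echo "nvme_get $ns_dev id-ns /dev/$ns_dev"
    done

Glob ordering is why the trace processes ng2n1..ng2n3 first and only then moves on to the nvme2nY block devices.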
00:09:05.111 07:40:55 nvme_scc -- nvme/functions.sh@21-23 -- [trace condensed] remaining id-ns fields parsed into ng2n3[]: nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000 lbaf0-lbaf7 identical to ng2n1 (lbaf4 'ms:0 lbads:12 rp:0' in use)
00:09:05.377 07:40:55 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3
00:09:05.377 07:40:55 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:09:05.377 07:40:55 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:09:05.377 07:40:55 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:09:05.377 07:40:55 nvme_scc -- nvme/functions.sh@21-23 -- [trace condensed] first id-ns fields parsed into nvme2n1[]: nsze=0x100000 ncap=0x100000 nuse=0x100000
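After each namespace is parsed, the functions.sh@58 records file it into _ctrl_ns keyed by the numeric namespace ID that ${ns##*n} strips from the device name. Since ng2n1 and nvme2n1 share nsid 1, the block-device entry processed later replaces the char-device one; the following sketch illustrates that expansion over the devices seen in this trace (the overwrite behavior is inferred from the log, not checked against upstream functions.sh):

    declare -A _ctrl_ns=()

    for ns_dev in ng2n1 ng2n2 ng2n3 nvme2n1 nvme2n2; do
        _ctrl_ns[${ns_dev##*n}]=$ns_dev   # ${ns_dev##*n} keeps the trailing nsid digits
    done
    declare -p _ctrl_ns   # declare -A _ctrl_ns=([1]="nvme2n1" [2]="nvme2n2" [3]="ng2n3")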
00:09:05.377 07:40:55 nvme_scc -- nvme/functions.sh@21-23 -- [trace condensed] remaining id-ns fields parsed into nvme2n1[]: nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000 lbaf0-lbaf3 (lbads:9 formats, ms 0/8/16/64)
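Every namespace in this trace reports the same geometry: flbas=0x4 selects LBA format 4, and lbaf4's lbads:12 means a data block size of 2^12 = 4096 bytes with no metadata (ms:0), which matches its "(in use)" marker. A small sketch decoding that from the captured values:

    # Values as captured into ng2n1[] earlier in this trace.
    declare -A ng2n1=([flbas]=0x4 [lbaf4]='ms:0 lbads:12 rp:0 (in use)')

    fmt=$(( ${ng2n1[flbas]} & 0xf ))   # low 4 bits of FLBAS select the format -> 4
    lbads=$(sed -n 's/.*lbads:\([0-9]*\).*/\1/p' <<< "${ng2n1[lbaf$fmt]}")
    echo "active format lbaf$fmt, block size $((1 << lbads)) bytes"   # 4096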
lbads:9 rp:0 ' 00:09:05.378 07:40:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.378 07:40:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.378 07:40:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:05.378 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:05.378 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:05.378 07:40:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.378 07:40:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.378 07:40:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:05.378 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:05.378 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:05.378 07:40:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.378 07:40:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.378 07:40:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:05.378 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:05.378 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:05.378 07:40:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.378 07:40:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.378 07:40:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:05.378 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:05.378 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:05.378 07:40:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.378 07:40:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.378 07:40:55 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:05.378 07:40:55 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:05.378 07:40:55 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:05.378 07:40:55 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:05.378 07:40:55 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:05.378 07:40:55 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:05.378 07:40:55 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:05.378 07:40:55 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:05.378 07:40:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.378 07:40:55 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:05.378 07:40:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.378 07:40:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:05.378 07:40:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.378 07:40:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.378 07:40:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:05.378 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:05.378 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:05.379 07:40:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.379 07:40:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.379 07:40:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
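The records above are the xtrace of nvme_get (nvme/functions.sh@16-23): it runs nvme-cli's id-ns against the device node, reads each "field : value" output line with IFS=:, and evals the pair into a global associative array named after the device. A minimal sketch of that loop, assuming only what the @16-@23 trace shows; the helper name and the exact whitespace trimming are illustrative:

    nvme_get_sketch() {
        local ref=$1 reg val
        local -gA "$ref=()"              # global array named after the device, as in the trace
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue    # banner/blank lines carry no value
            reg=${reg//[[:space:]]/}     # "lbaf  4 " -> "lbaf4"
            eval "$ref[$reg]=\${val# }"  # e.g. nvme2n1[nsze]=0x100000
        done < <(/usr/local/src/nvme-cli/nvme id-ns "/dev/$ref")
    }

Called as "nvme_get_sketch nvme2n1", this reproduces the nvme2n1[...] assignments logged above; note that read hands everything after the first colon to val, which is how multi-colon values like the lbafN strings survive intact.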
00:09:05.378 07:40:55 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:05.378 07:40:55 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:09:05.378 07:40:55 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:09:05.378 07:40:55 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:09:05.378 07:40:55 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:09:05.379 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 nvme2n2[ncap]=0x100000 nvme2n2[nuse]=0x100000 nvme2n2[nsfeat]=0x14
00:09:05.379 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 nvme2n2[flbas]=0x4 nvme2n2[mc]=0x3 nvme2n2[dpc]=0x1f nvme2n2[dps]=0
00:09:05.379 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 nvme2n2[rescap]=0 nvme2n2[fpi]=0 nvme2n2[dlfeat]=1
00:09:05.379 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 nvme2n2[nawupf]=0 nvme2n2[nacwu]=0 nvme2n2[nabsn]=0 nvme2n2[nabo]=0 nvme2n2[nabspf]=0 nvme2n2[noiob]=0
00:09:05.379 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 nvme2n2[npwg]=0 nvme2n2[npwa]=0 nvme2n2[npdg]=0 nvme2n2[npda]=0 nvme2n2[nows]=0
00:09:05.379 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 nvme2n2[mcl]=128 nvme2n2[msrc]=127 nvme2n2[nulbaf]=0
00:09:05.379 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 nvme2n2[nsattr]=0 nvme2n2[nvmsetid]=0 nvme2n2[endgid]=0
00:09:05.380 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 nvme2n2[eui64]=0000000000000000
00:09:05.380 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:05.380 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 '
00:09:05.380 07:40:55 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
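The namespaces on this controller all identify with the same geometry: nsze = ncap = nuse = 0x100000 blocks, and flbas=0x4 selects lbaf4 ('ms:0 lbads:12', marked in use), i.e. 4096-byte logical blocks with no metadata. A quick arithmetic check of what that works out to, using values copied from the records above:

    nsze=0x100000                              # namespace size in logical blocks
    lbads=12                                   # lbaf4 in use: 2^12 = 4096-byte blocks
    echo $((nsze * (1 << lbads)))              # 4294967296 bytes
    echo $(((nsze * (1 << lbads)) >> 30))GiB   # 4GiB per namespace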
00:09:05.380 07:40:55 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:05.380 07:40:55 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:09:05.380 07:40:55 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:09:05.380 07:40:55 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:09:05.380 07:40:55 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
00:09:05.380 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 nvme2n3[ncap]=0x100000 nvme2n3[nuse]=0x100000 nvme2n3[nsfeat]=0x14
00:09:05.380 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 nvme2n3[flbas]=0x4 nvme2n3[mc]=0x3 nvme2n3[dpc]=0x1f nvme2n3[dps]=0
00:09:05.380 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 nvme2n3[rescap]=0 nvme2n3[fpi]=0 nvme2n3[dlfeat]=1
00:09:05.381 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 nvme2n3[nawupf]=0 nvme2n3[nacwu]=0 nvme2n3[nabsn]=0 nvme2n3[nabo]=0 nvme2n3[nabspf]=0 nvme2n3[noiob]=0
00:09:05.381 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 nvme2n3[npwg]=0 nvme2n3[npwa]=0 nvme2n3[npdg]=0 nvme2n3[npda]=0 nvme2n3[nows]=0
00:09:05.381 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 nvme2n3[mcl]=128 nvme2n3[msrc]=127 nvme2n3[nulbaf]=0
00:09:05.381 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 nvme2n3[nsattr]=0 nvme2n3[nvmsetid]=0 nvme2n3[endgid]=0
00:09:05.381 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 nvme2n3[eui64]=0000000000000000
00:09:05.381 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:05.382 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 '
00:09:05.382 07:40:55 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3
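The namespace walk that just finished is driven by the extglob pattern traced at functions.sh@54: it matches both the block nodes (nvme2nY) and the generic character nodes (ngXnY) under the controller's sysfs directory, and keys _ctrl_ns by the namespace index stripped off the node name. Roughly, under the assumption that extglob is enabled (the pattern requires it):

    shopt -s extglob
    declare -a _ctrl_ns=()               # indexed by namespace number
    ctrl=/sys/class/nvme/nvme2
    # for this controller the pattern expands to "$ctrl/"@("ng2"|"nvme2n")*
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue
        _ctrl_ns[${ns##*n}]=${ns##*/}    # e.g. _ctrl_ns[1]=nvme2n1
    done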
00:09:05.382 07:40:55 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2
00:09:05.382 07:40:55 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns
00:09:05.382 07:40:55 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0
00:09:05.382 07:40:55 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2
00:09:05.382 07:40:55 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:09:05.382 07:40:55 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]]
00:09:05.382 07:40:55 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0
00:09:05.382 07:40:55 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0
00:09:05.382 07:40:55 nvme_scc -- scripts/common.sh@18 -- # local i
00:09:05.382 07:40:55 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]]
00:09:05.382 07:40:55 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:09:05.382 07:40:55 nvme_scc -- scripts/common.sh@27 -- # return 0
00:09:05.382 07:40:55 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3
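pci_can_use (scripts/common.sh@18-27) gates controller discovery on PCI allow/block lists; in this run both are unset, which is why the trace shows a bare '[[ =~ 0000:00:13.0 ]]' against an empty expansion, then '[[ -z '' ]]' and 'return 0', letting 0000:00:13.0 through. A sketch of that filter, assuming two space-separated list variables; the names pci_blocked/pci_allowed are placeholders, not necessarily SPDK's own:

    pci_can_use_sketch() {
        local i bdf=$1
        for i in ${pci_blocked:-}; do           # an explicit block list wins
            [[ $i == "$bdf" ]] && return 1
        done
        [[ -z ${pci_allowed:-} ]] && return 0   # no allow list: every BDF passes
        for i in $pci_allowed; do
            [[ $i == "$bdf" ]] && return 0
        done
        return 1
    }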
00:09:05.382 07:40:55 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3
00:09:05.382 07:40:55 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3
00:09:05.382 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 nvme3[ssvid]=0x1af4 nvme3[sn]='12343 ' nvme3[mn]='QEMU NVMe Ctrl ' nvme3[fr]='8.0.0 '
00:09:05.382 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 nvme3[ieee]=525400 nvme3[cmic]=0x2 nvme3[mdts]=7 nvme3[cntlid]=0 nvme3[ver]=0x10400
00:09:05.382 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 nvme3[rtd3e]=0 nvme3[oaes]=0x100 nvme3[ctratt]=0x88010 nvme3[rrls]=0 nvme3[cntrltype]=1
00:09:05.382 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 nvme3[crdt1]=0 nvme3[crdt2]=0 nvme3[crdt3]=0
00:09:05.383 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 nvme3[vwci]=0 nvme3[mec]=0 nvme3[oacs]=0x12a nvme3[acl]=3 nvme3[aerl]=3
00:09:05.383 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 nvme3[lpa]=0x7 nvme3[elpe]=0 nvme3[npss]=0 nvme3[avscc]=0 nvme3[apsta]=0
00:09:05.383 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 nvme3[cctemp]=373 nvme3[mtfa]=0 nvme3[hmpre]=0 nvme3[hmmin]=0
00:09:05.383 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 nvme3[unvmcap]=0 nvme3[rpmbs]=0 nvme3[edstt]=0 nvme3[dsto]=0 nvme3[fwug]=0 nvme3[kas]=0
00:09:05.383 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 nvme3[mntmt]=0 nvme3[mxtmt]=0 nvme3[sanicap]=0 nvme3[hmminds]=0 nvme3[hmmaxd]=0
00:09:05.383 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 nvme3[endgidmax]=1 nvme3[anatt]=0 nvme3[anacap]=0 nvme3[anagrpmax]=0 nvme3[nanagrpid]=0
00:09:05.384 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 nvme3[domainid]=0 nvme3[megcap]=0 nvme3[sqes]=0x66 nvme3[cqes]=0x44 nvme3[maxcmd]=0
00:09:05.384 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 nvme3[oncs]=0x15d nvme3[fuses]=0 nvme3[fna]=0 nvme3[vwc]=0x7 nvme3[awun]=0 nvme3[awupf]=0 nvme3[icsvscc]=0
00:09:05.384 07:40:55 nvme_scc -- nvme/functions.sh@21 -- # read
-r reg val 00:09:05.384 07:40:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.384 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:05.384 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:05.384 07:40:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.384 07:40:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.384 07:40:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.384 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:05.384 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:05.384 07:40:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.384 07:40:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.384 07:40:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:05.384 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:05.384 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:05.384 07:40:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.384 07:40:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.384 07:40:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:05.384 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:05.384 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:05.384 07:40:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.384 07:40:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.384 07:40:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.384 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:05.384 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:05.384 07:40:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.384 07:40:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.384 07:40:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.384 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:05.384 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:05.384 07:40:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.384 07:40:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.385 07:40:55 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:05.385 07:40:55 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:05.385 07:40:55 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:05.385 07:40:55 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:05.385 07:40:55 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:05.385 07:40:55 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:05.385 07:40:55 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:05.956 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:06.218 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:06.218 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:06.480 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:06.480 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:06.480 07:40:56 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:06.480 07:40:56 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:06.480 07:40:56 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:06.480 07:40:56 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:06.480 ************************************ 00:09:06.480 START TEST nvme_simple_copy 00:09:06.480 ************************************ 00:09:06.480 07:40:56 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:06.741 Initializing NVMe Controllers 00:09:06.741 Attaching to 0000:00:10.0 00:09:06.741 Controller supports SCC. Attached to 0000:00:10.0 00:09:06.741 Namespace ID: 1 size: 6GB 00:09:06.741 Initialization complete. 
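The controller selection traced above keys off the ONCS field of Identify Controller: every candidate here reports oncs=0x15d, functions.sh keeps a controller only when bit 8 (Copy command support) is set via (( oncs & 1 << 8 )), and nvme_scc.sh then takes the first entry of that list, nvme1 at 0000:00:10.0, for the simple-copy run. A minimal stand-alone sketch of the same capability check follows; it assumes nvme-cli's plain-text id-ctrl output ("oncs : 0x15d") and uses a hypothetical helper name, it is not the autotest code itself.

#!/usr/bin/env bash
# Sketch only (not functions.sh): report the first controller whose ONCS
# field advertises the Copy command, i.e. bit 8 is set -- the same
# (( oncs & 1 << 8 )) test the trace performs for every controller.
first_ctrl_with_scc() {
  local ctrl dev oncs
  for ctrl in /sys/class/nvme/nvme*; do
    dev=/dev/${ctrl##*/}
    # nvme-cli's id-ctrl prints a line like "oncs : 0x15d"
    oncs=$(nvme id-ctrl "$dev" | awk '$1 == "oncs" {print $3}')
    [[ -n $oncs ]] || continue
    if (( oncs & 1 << 8 )); then
      echo "${ctrl##*/}"        # e.g. nvme1
      return 0
    fi
  done
  return 1
}

first_ctrl_with_scc || echo "no controller supports Simple Copy" >&2

In the trace all four controllers pass the bit-8 test and are echoed, so get_ctrls_with_feature returns them all and the test simply binds to the first one reported, which is why "Controller supports SCC. Attached to 0000:00:10.0" appears before the copy itself starts.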
00:09:06.741 00:09:06.741 Controller QEMU NVMe Ctrl (12340 ) 00:09:06.741 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:09:06.741 Namespace Block Size:4096 00:09:06.741 Writing LBAs 0 to 63 with Random Data 00:09:06.741 Copied LBAs from 0 - 63 to the Destination LBA 256 00:09:06.741 LBAs matching Written Data: 64 00:09:06.741 ************************************ 00:09:06.741 END TEST nvme_simple_copy 00:09:06.741 ************************************ 00:09:06.741 00:09:06.741 real 0m0.274s 00:09:06.741 user 0m0.114s 00:09:06.741 sys 0m0.058s 00:09:06.741 07:40:56 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:06.741 07:40:56 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:09:06.741 ************************************ 00:09:06.741 END TEST nvme_scc 00:09:06.741 ************************************ 00:09:06.741 00:09:06.741 real 0m7.925s 00:09:06.741 user 0m1.154s 00:09:06.741 sys 0m1.479s 00:09:06.741 07:40:56 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:06.741 07:40:56 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:06.741 07:40:56 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:09:06.741 07:40:56 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:09:06.741 07:40:56 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:09:06.741 07:40:56 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:09:06.741 07:40:56 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:09:06.741 07:40:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:06.741 07:40:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:06.741 07:40:56 -- common/autotest_common.sh@10 -- # set +x 00:09:06.741 ************************************ 00:09:06.741 START TEST nvme_fdp 00:09:06.741 ************************************ 00:09:06.741 07:40:56 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:09:07.001 * Looking for test storage... 00:09:07.001 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:07.001 07:40:56 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:07.001 07:40:56 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version 00:09:07.001 07:40:56 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:07.001 07:40:56 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:07.001 07:40:56 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:07.001 07:40:56 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:07.001 07:40:56 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:07.001 07:40:56 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:09:07.001 07:40:56 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:09:07.001 07:40:56 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:09:07.001 07:40:56 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:09:07.001 07:40:56 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:09:07.001 07:40:56 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:09:07.001 07:40:56 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:09:07.001 07:40:56 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:07.001 07:40:56 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:09:07.001 07:40:56 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:09:07.001 07:40:56 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:07.001 07:40:56 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:07.001 07:40:56 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:07.001 07:40:56 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:07.001 07:40:56 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:07.001 07:40:56 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:07.001 07:40:56 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:07.001 07:40:56 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:07.001 07:40:56 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:07.001 07:40:56 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:07.001 07:40:56 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:07.001 07:40:56 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:07.001 07:40:56 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:07.001 07:40:56 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:07.001 07:40:56 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:07.001 07:40:56 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:07.001 07:40:56 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:07.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.001 --rc genhtml_branch_coverage=1 00:09:07.001 --rc genhtml_function_coverage=1 00:09:07.001 --rc genhtml_legend=1 00:09:07.001 --rc geninfo_all_blocks=1 00:09:07.001 --rc geninfo_unexecuted_blocks=1 00:09:07.001 00:09:07.001 ' 00:09:07.001 07:40:56 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:07.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.001 --rc genhtml_branch_coverage=1 00:09:07.001 --rc genhtml_function_coverage=1 00:09:07.001 --rc genhtml_legend=1 00:09:07.001 --rc geninfo_all_blocks=1 00:09:07.001 --rc geninfo_unexecuted_blocks=1 00:09:07.001 00:09:07.001 ' 00:09:07.001 07:40:56 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:07.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.002 --rc genhtml_branch_coverage=1 00:09:07.002 --rc genhtml_function_coverage=1 00:09:07.002 --rc genhtml_legend=1 00:09:07.002 --rc geninfo_all_blocks=1 00:09:07.002 --rc geninfo_unexecuted_blocks=1 00:09:07.002 00:09:07.002 ' 00:09:07.002 07:40:56 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:07.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.002 --rc genhtml_branch_coverage=1 00:09:07.002 --rc genhtml_function_coverage=1 00:09:07.002 --rc genhtml_legend=1 00:09:07.002 --rc geninfo_all_blocks=1 00:09:07.002 --rc geninfo_unexecuted_blocks=1 00:09:07.002 00:09:07.002 ' 00:09:07.002 07:40:56 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:07.002 07:40:56 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:07.002 07:40:56 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:07.002 07:40:56 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:07.002 07:40:56 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:07.002 07:40:56 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:07.002 07:40:56 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:07.002 07:40:56 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:07.002 07:40:56 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:07.002 07:40:56 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.002 07:40:56 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.002 07:40:56 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.002 07:40:56 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:07.002 07:40:56 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.002 07:40:56 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:07.002 07:40:56 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:07.002 07:40:56 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:07.002 07:40:56 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:07.002 07:40:56 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:07.002 07:40:56 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:07.002 07:40:56 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:07.002 07:40:56 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:07.002 07:40:56 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:07.002 07:40:56 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:07.002 07:40:56 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:07.263 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:07.523 Waiting for block devices as requested 00:09:07.523 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:07.523 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:07.784 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:07.784 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:13.162 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:13.162 07:41:02 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:13.162 07:41:02 nvme_fdp 
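scan_nvme_ctrls, whose trace follows, walks /sys/class/nvme/nvme*, runs the nvme-cli id-ctrl binary for each controller, and evals every "field : value" pair into a per-controller associative array (nvme0[vid], nvme0[oncs], ...) alongside the ctrls/nvmes/bdfs lookup maps declared just above. A simplified sketch of that pattern is below; it assumes the PCI address can be read from the controller's sysfs device link and that id-ctrl prints plain "field : value" lines, and scan_ctrl is a hypothetical stand-in rather than the real function.

#!/usr/bin/env bash
# Sketch only: one associative array per controller filled from
# `nvme id-ctrl`, plus a bdf map, using the same "IFS=: read -r reg val"
# split the trace shows.
declare -A bdfs   # controller name -> PCI address

scan_ctrl() {
  local name=$1 dev=/dev/$1 reg val
  declare -gA "$name=()"                       # creates e.g. the global array nvme0
  # assumption: /sys/class/nvme/nvme0/device resolves to the PCI function
  bdfs[$name]=$(basename "$(readlink -f "/sys/class/nvme/$name/device")")
  while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}                   # "oncs   " -> "oncs"
    val=${val# }                               # drop the leading space after ':'
    [[ -n $reg && -n $val ]] || continue
    eval "${name}[\$reg]=\$val"                # e.g. nvme0[oncs]=0x15d
  done < <(nvme id-ctrl "$dev")
}

scan_ctrl nvme0
echo "nvme0 @ ${bdfs[nvme0]} oncs=${nvme0[oncs]}"

Caching the dump this way is what lets later helpers such as get_nvme_ctrl_feature answer "what is this controller's oncs" from the array (as seen at functions.sh@69-76 in the trace) without re-running nvme-cli.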
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:13.162 07:41:02 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:13.162 07:41:02 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:13.162 07:41:02 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:13.162 07:41:02 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.162 07:41:02 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:13.162 07:41:02 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:13.162 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:13.163 07:41:02 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.163 07:41:02 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.163 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:13.164 07:41:02 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.164 
07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:13.164 07:41:02 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:13.164 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:13.165 07:41:02 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:13.165 07:41:02 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.165 07:41:02 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.165 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:13.166 07:41:02 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
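[editor sketch] The trace up to this point is nvme/functions.sh's nvme_get populating the global associative array ng0n1 from `nvme id-ns /dev/ng0n1` output: each "reg : val" line is split on ':' and eval'd into the array (ng0n1[nsze]=0x140000, ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)', and so on). A minimal sketch of that pattern is below; the helper name nvme_get_sketch, the bare `nvme` in PATH, and the simplified whitespace handling are illustrative assumptions, not the exact SPDK helper.

    #!/usr/bin/env bash
    # Sketch (assumption, not the SPDK implementation) of the pattern the trace
    # exercises: run an nvme-cli query, split each "reg : val" line on ':',
    # and store the pairs in a global associative array named after the device.
    nvme_get_sketch() {
        local ref=$1 cmd=$2 dev=$3 reg val
        declare -gA "$ref=()"
        while IFS=: read -r reg val; do
            # Strip blanks from the key and leading blanks from the value.
            reg=${reg//[[:space:]]/}
            val="${val#"${val%%[![:space:]]*}"}"
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[${reg}]=\"\$val\""
        done < <(nvme "$cmd" "$dev")
    }

    # Usage (hypothetical): nvme_get_sketch ng0n1 id-ns /dev/ng0n1
    #                       echo "${ng0n1[nsze]}"   # -> 0x140000 on this VM

The eval-per-field approach is what produces the long "eval 'ng0n1[...]=...'" runs in this log: every identify field becomes one array assignment, and the same loop is reused for id-ctrl and id-ns targets.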
00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:13.166 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.167 07:41:02 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:13.167 07:41:02 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:13.167 07:41:02 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:13.167 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:13.168 07:41:02 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:13.168 07:41:02 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:13.168 07:41:02 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:13.168 07:41:02 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:13.168 07:41:02 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:13.168 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.169 07:41:02 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.169 07:41:02 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.169 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.170 07:41:02 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.170 07:41:02 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.170 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:13.171 07:41:02 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:09:13.171 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:09:13.172 07:41:02 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
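Earlier in this pass (the functions.sh@53-58 records) the script discovered this controller's namespaces with an extglob pattern before handing each one to nvme_get. A breakdown of how that glob expands, with illustrative values (requires extglob, which the test scripts enable):

    ctrl=/sys/class/nvme/nvme1
    echo "${ctrl##*nvme}"    # -> 1      (controller index)
    echo "${ctrl##*/}"       # -> nvme1  (controller device name)
    shopt -s extglob nullglob
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        echo "${ns##*/}"     # matches ng1n1 as well as nvme1n1, nvme1n2, ...
    done

The alternation picks up both the generic character-device nodes (ng1n1) and the block namespace nodes (nvme1n1), which is why this trace parses the same namespace twice, once per node type; the @58 record then keys the result into _ctrl_ns by namespace id via ${ns##*n}.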
00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:13.172 07:41:02 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:13.172 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:13.173 07:41:02 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:13.173 07:41:02 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.173 07:41:02 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.173 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:13.174 07:41:02 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
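The identify-namespace fields captured in these records are enough to compute the namespace capacity by hand: flbas=0x7 selects LBA format 7 (lower nibble of FLBAS), whose descriptor in this trace reads ms:64 lbads:12, i.e. 2^12-byte data blocks. A worked example in shell arithmetic, using the values from this trace:

    nsze=$((0x17a17a))               # namespace size in logical blocks -> 1548666
    lbads=12                         # from the in-use lbaf7: 2^12 = 4096-byte blocks
    echo $(( nsze * (1 << lbads) ))  # -> 6343335936 bytes, roughly 5.9 GiB

nsze, ncap and nuse are all 0x17a17a here, consistent with a fully provisioned, fully utilized QEMU test namespace.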
00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:13.174 07:41:02 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:13.174 07:41:02 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:13.174 07:41:02 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:13.174 07:41:02 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.174 07:41:02 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:13.174 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
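Annotation: the repeated functions.sh@21-23 entries above are the identify-parsing loop — each line of `nvme id-ctrl` output is split on ':' into a register name and a value, and non-empty values are eval'd into a bash associative array (nvme2[vid], nvme2[sn], nvme2[mdts], ...). A minimal stand-alone sketch of that pattern follows; it assumes nvme-cli is installed and /dev/nvme2 exists, and the function/array names (parse_id_ctrl, ctrl_info) are illustrative, not the SPDK helpers themselves.

    #!/usr/bin/env bash
    # Sketch of the pattern the trace shows: read "reg : value" pairs from
    # nvme-cli and keep them in an associative array keyed by register name.
    declare -A ctrl_info

    parse_id_ctrl() {                  # $1 = controller device, e.g. /dev/nvme2
        local reg val
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue                  # skip banner/blank lines
            ctrl_info[${reg//[[:space:]]/}]=${val# }   # strip spaces from the key
        done < <(nvme id-ctrl "$1")
    }

    parse_id_ctrl /dev/nvme2
    echo "vid=${ctrl_info[vid]} mdts=${ctrl_info[mdts]} subnqn=${ctrl_info[subnqn]}"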
00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:13.175 07:41:02 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
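Annotation: among the values captured just above, mdts=7 sets the per-command transfer ceiling. MDTS is a power of two in units of the controller's minimum memory page size (CAP.MPSMIN); assuming the common 4 KiB minimum page (CAP is not shown in this log, so this is an assumption), that works out to 512 KiB:

    # Illustrative arithmetic only; min_page assumes CAP.MPSMIN == 0 (4 KiB page).
    mdts=7
    min_page=4096
    echo "$(( (1 << mdts) * min_page / 1024 )) KiB max data transfer"   # -> 512 KiB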
00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:13.175 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:13.176 07:41:02 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:13.176 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.177 07:41:02 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
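Annotation: once the id-ctrl pass for nvme2 finishes, the entries that follow (functions.sh@53-58) switch to the per-namespace pass — the script binds a nameref to nvme2_ns, globs the ng2n*/nvme2n* nodes under /sys/class/nvme/nvme2, and runs `nvme id-ns` against each one. A rough stand-alone equivalent is sketched below, with a placeholder name (list_namespaces) rather than the SPDK functions.

    #!/usr/bin/env bash
    # Walk a controller's namespace nodes and identify each one, as the
    # following trace entries do for ng2n1, ng2n2 and ng2n3.
    shopt -s nullglob

    list_namespaces() {                    # $1 = controller name, e.g. nvme2
        local ctrl=/sys/class/nvme/$1 ns
        for ns in "$ctrl"/ng*n* "$ctrl"/"$1"n*; do
            ns=${ns##*/}                       # ng2n1, nvme2n1, ...
            echo "== $ns =="
            nvme id-ns "/dev/$ns" | head -n 5  # nsze, ncap, nuse, nsfeat, nlbaf
        done
    }

    list_namespaces nvme2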
00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:09:13.177 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.178 
07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.178 07:41:02 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.178 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:09:13.179 07:41:02 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.179 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.180 
07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.180 07:41:02 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@16 -- # 
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:09:13.180 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:09:13.181 
07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
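The trace above is the nvme_get helper materializing `nvme id-ns` output into a bash associative array named after the namespace node (here ng2n3): each "reg : val" line is split on the first colon, valueless lines are skipped with [[ -n ... ]], and the pair is committed via eval. A minimal sketch of that parsing pattern, with the array name fixed so the example is self-contained (the real nvme/functions.sh evals into a caller-named array instead):

#!/usr/bin/env bash
# Sketch of the id-ns parsing pattern traced above (simplified; the array
# name is hardcoded here, whereas nvme_get takes it as an argument).
declare -A ns_info

parse_id_ns() {
  local reg val
  while IFS=: read -r reg val; do                # split "reg : val" on ':'
    reg=${reg//[[:space:]]/}                     # "lbaf  4 " -> "lbaf4"
    val="${val#"${val%%[![:space:]]*}"}"         # trim leading blanks
    [[ -n $val ]] && ns_info[$reg]=$val          # skip valueless lines
  done
}

# Sample input copied from the values in this trace, so the sketch runs
# without hardware; with a real device the input would come from the tool:
#   parse_id_ns < <(/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3)
parse_id_ns <<'EOF'
nsze    : 0x100000
flbas   : 0x4
lbaf  4 : ms:0   lbads:12 rp:0 (in use)
EOF
printf 'nsze=%s flbas=%s lbaf4=%s\n' \
  "${ns_info[nsze]}" "${ns_info[flbas]}" "${ns_info[lbaf4]}"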
00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:09:13.181 07:41:02 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.181 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:13.182 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:09:13.182 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:09:13.182 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.182 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.182 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:13.182 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:09:13.182 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:09:13.182 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.182 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.182 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:13.182 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:13.182 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:13.182 07:41:02 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.182 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.182 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:13.182 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:13.182 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:13.182 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.182 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.182 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:13.182 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:13.182 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:13.182 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.182 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.182 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:13.182 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:13.182 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:13.182 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.182 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.182 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:13.182 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:13.182 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:13.182 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.182 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.182 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:13.182 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:13.182 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:13.182 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.182 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.182 07:41:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:13.182 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:13.182 07:41:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:13.182 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.182 07:41:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:13.182 07:41:03 nvme_fdp -- 
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:13.182 
07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:13.182 07:41:03 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.182 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:13.183 
07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
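Each namespace here reports the same eight LBA formats. In these lbafN descriptors, ms is the per-block metadata size in bytes, lbads is the base-2 log of the data block size (lbads:9 is 512 B, lbads:12 is 4096 B), and rp is the relative performance hint; flbas=0x4 selects format 4, which is why lbaf4 (ms:0 lbads:12) is the one tagged "(in use)". A hypothetical decode_lbaf helper, not part of nvme/functions.sh, that turns one captured descriptor string into plain units:

# Hypothetical helper for illustration only: decode "ms:.. lbads:.. rp:.."
decode_lbaf() {
  local desc=$1 ms lbads rp
  ms=${desc#*ms:};       ms=${ms%% *}        # metadata bytes per block
  lbads=${desc#*lbads:}; lbads=${lbads%% *}  # log2(data block size)
  rp=${desc#*rp:};       rp=${rp%% *}        # relative performance hint
  printf '%u-byte blocks, %u metadata bytes, rp=%u\n' \
    $((1 << lbads)) "$ms" "$rp"
}

decode_lbaf 'ms:0 lbads:12 rp:0 (in use)'
# -> 4096-byte blocks, 0 metadata bytes, rp=0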
00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:13.183 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:13.184 07:41:03 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:13.184 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:13.185 07:41:03 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.185 07:41:03 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:13.185 07:41:03 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:13.185 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.186 07:41:03 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:13.186 07:41:03 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.186 07:41:03 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:13.186 07:41:03 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:13.186 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:13.187 07:41:03 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:13.187 07:41:03 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:13.187 07:41:03 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:13.187 07:41:03 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.187 07:41:03 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.187 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.448 07:41:03 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.448 
07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:13.448 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.449 07:41:03 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.449 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:13.450 07:41:03 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:13.450 07:41:03 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:13.450 07:41:03 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:13.450 07:41:03 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:13.450 07:41:03 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:13.709 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:14.282 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:14.282 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:14.282 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:14.282 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:14.542 07:41:04 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:14.542 07:41:04 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:14.542 07:41:04 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:14.542 07:41:04 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:14.542 ************************************ 00:09:14.543 START TEST nvme_flexible_data_placement 00:09:14.543 ************************************ 00:09:14.543 07:41:04 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:14.803 Initializing NVMe Controllers 00:09:14.803 Attaching to 0000:00:13.0 00:09:14.803 Controller supports FDP Attached to 0000:00:13.0 00:09:14.803 Namespace ID: 1 Endurance Group ID: 1 00:09:14.804 Initialization complete. 
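The controller selection traced above reduces to a single bit test: CTRATT bit 19 (0x80000) advertises Flexible Data Placement, which is why nvme3 (ctratt=0x88010) is picked while the 0x8000 controllers are skipped. As a standalone illustration of the same probe, assuming nvme-cli with JSON output and jq are available on the host (the /dev/nvme0 device path is illustrative, not one of the controllers in this run):

# Hedged sketch: check CTRATT bit 19 for FDP support, outside the test harness.
ctratt=$(nvme id-ctrl /dev/nvme0 --output-format=json | jq -r '.ctratt')
if (( ctratt & (1 << 19) )); then
    printf 'FDP supported (ctratt=0x%x)\n' "$ctratt"
else
    printf 'no FDP support (ctratt=0x%x)\n' "$ctratt"
fi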
00:09:14.804 00:09:14.804 ================================== 00:09:14.804 == FDP tests for Namespace: #01 == 00:09:14.804 ================================== 00:09:14.804 00:09:14.804 Get Feature: FDP: 00:09:14.804 ================= 00:09:14.804 Enabled: Yes 00:09:14.804 FDP configuration Index: 0 00:09:14.804 00:09:14.804 FDP configurations log page 00:09:14.804 =========================== 00:09:14.804 Number of FDP configurations: 1 00:09:14.804 Version: 0 00:09:14.804 Size: 112 00:09:14.804 FDP Configuration Descriptor: 0 00:09:14.804 Descriptor Size: 96 00:09:14.804 Reclaim Group Identifier format: 2 00:09:14.804 FDP Volatile Write Cache: Not Present 00:09:14.804 FDP Configuration: Valid 00:09:14.804 Vendor Specific Size: 0 00:09:14.804 Number of Reclaim Groups: 2 00:09:14.804 Number of Reclaim Unit Handles: 8 00:09:14.804 Max Placement Identifiers: 128 00:09:14.804 Number of Namespaces Supported: 256 00:09:14.804 Reclaim Unit Nominal Size: 6000000 bytes 00:09:14.804 Estimated Reclaim Unit Time Limit: Not Reported 00:09:14.804 RUH Desc #000: RUH Type: Initially Isolated 00:09:14.804 RUH Desc #001: RUH Type: Initially Isolated 00:09:14.804 RUH Desc #002: RUH Type: Initially Isolated 00:09:14.804 RUH Desc #003: RUH Type: Initially Isolated 00:09:14.804 RUH Desc #004: RUH Type: Initially Isolated 00:09:14.804 RUH Desc #005: RUH Type: Initially Isolated 00:09:14.804 RUH Desc #006: RUH Type: Initially Isolated 00:09:14.804 RUH Desc #007: RUH Type: Initially Isolated 00:09:14.804 00:09:14.804 FDP reclaim unit handle usage log page 00:09:14.804 ====================================== 00:09:14.804 Number of Reclaim Unit Handles: 8 00:09:14.804 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:14.804 RUH Usage Desc #001: RUH Attributes: Unused 00:09:14.804 RUH Usage Desc #002: RUH Attributes: Unused 00:09:14.804 RUH Usage Desc #003: RUH Attributes: Unused 00:09:14.804 RUH Usage Desc #004: RUH Attributes: Unused 00:09:14.804 RUH Usage Desc #005: RUH Attributes: Unused 00:09:14.804 RUH Usage Desc #006: RUH Attributes: Unused 00:09:14.804 RUH Usage Desc #007: RUH Attributes: Unused 00:09:14.804 00:09:14.804 FDP statistics log page 00:09:14.804 ======================= 00:09:14.804 Host bytes with metadata written: 983367680 00:09:14.804 Media bytes with metadata written: 983605248 00:09:14.804 Media bytes erased: 0 00:09:14.804 00:09:14.804 FDP Reclaim unit handle status 00:09:14.804 ============================== 00:09:14.804 Number of RUHS descriptors: 2 00:09:14.804 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000001630 00:09:14.804 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:14.804 00:09:14.804 FDP write on placement id: 0 success 00:09:14.804 00:09:14.804 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:09:14.804 00:09:14.804 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:14.804 00:09:14.804 Get Feature: FDP Events for Placement handle: #0 00:09:14.804 ======================== 00:09:14.804 Number of FDP Events: 6 00:09:14.804 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:14.804 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:14.804 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:09:14.804 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:14.804 FDP Event: #4 Type: Media Reallocated Enabled: No 00:09:14.804 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:14.804 00:09:14.804 FDP events log page 
00:09:14.804 =================== 00:09:14.804 Number of FDP events: 1 00:09:14.804 FDP Event #0: 00:09:14.804 Event Type: RU Not Written to Capacity 00:09:14.804 Placement Identifier: Valid 00:09:14.804 NSID: Valid 00:09:14.804 Location: Valid 00:09:14.804 Placement Identifier: 0 00:09:14.804 Event Timestamp: 6 00:09:14.804 Namespace Identifier: 1 00:09:14.804 Reclaim Group Identifier: 0 00:09:14.804 Reclaim Unit Handle Identifier: 0 00:09:14.804 00:09:14.804 FDP test passed 00:09:14.804 00:09:14.804 real 0m0.236s 00:09:14.804 user 0m0.079s 00:09:14.804 sys 0m0.056s 00:09:14.804 07:41:04 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:14.804 ************************************ 00:09:14.804 END TEST nvme_flexible_data_placement 00:09:14.804 ************************************ 00:09:14.804 07:41:04 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:14.804 ************************************ 00:09:14.804 END TEST nvme_fdp 00:09:14.804 ************************************ 00:09:14.804 00:09:14.804 real 0m7.889s 00:09:14.804 user 0m1.147s 00:09:14.804 sys 0m1.429s 00:09:14.804 07:41:04 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:14.804 07:41:04 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:14.804 07:41:04 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:14.804 07:41:04 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:14.804 07:41:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:14.804 07:41:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:14.804 07:41:04 -- common/autotest_common.sh@10 -- # set +x 00:09:14.804 ************************************ 00:09:14.804 START TEST nvme_rpc 00:09:14.804 ************************************ 00:09:14.804 07:41:04 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:14.804 * Looking for test storage... 
00:09:14.804 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:14.804 07:41:04 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:14.804 07:41:04 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:14.804 07:41:04 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:15.065 07:41:04 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:15.065 07:41:04 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:15.065 07:41:04 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:15.065 07:41:04 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:15.065 07:41:04 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:15.065 07:41:04 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:15.065 07:41:04 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:15.065 07:41:04 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:15.065 07:41:04 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:15.065 07:41:04 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:15.065 07:41:04 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:15.065 07:41:04 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:15.065 07:41:04 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:15.065 07:41:04 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:15.065 07:41:04 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:15.065 07:41:04 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:15.065 07:41:04 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:15.065 07:41:04 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:15.065 07:41:04 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:15.065 07:41:04 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:15.065 07:41:04 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:15.065 07:41:04 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:15.065 07:41:04 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:15.065 07:41:04 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:15.065 07:41:04 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:15.065 07:41:04 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:15.065 07:41:04 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:15.065 07:41:04 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:15.065 07:41:04 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:15.065 07:41:04 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:15.065 07:41:04 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:15.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.065 --rc genhtml_branch_coverage=1 00:09:15.065 --rc genhtml_function_coverage=1 00:09:15.065 --rc genhtml_legend=1 00:09:15.065 --rc geninfo_all_blocks=1 00:09:15.065 --rc geninfo_unexecuted_blocks=1 00:09:15.065 00:09:15.065 ' 00:09:15.065 07:41:04 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:15.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.065 --rc genhtml_branch_coverage=1 00:09:15.065 --rc genhtml_function_coverage=1 00:09:15.065 --rc genhtml_legend=1 00:09:15.065 --rc geninfo_all_blocks=1 00:09:15.065 --rc geninfo_unexecuted_blocks=1 00:09:15.065 00:09:15.065 ' 00:09:15.065 07:41:04 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:15.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.065 --rc genhtml_branch_coverage=1 00:09:15.065 --rc genhtml_function_coverage=1 00:09:15.065 --rc genhtml_legend=1 00:09:15.065 --rc geninfo_all_blocks=1 00:09:15.065 --rc geninfo_unexecuted_blocks=1 00:09:15.065 00:09:15.065 ' 00:09:15.065 07:41:04 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:15.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.065 --rc genhtml_branch_coverage=1 00:09:15.065 --rc genhtml_function_coverage=1 00:09:15.065 --rc genhtml_legend=1 00:09:15.065 --rc geninfo_all_blocks=1 00:09:15.065 --rc geninfo_unexecuted_blocks=1 00:09:15.065 00:09:15.065 ' 00:09:15.065 07:41:04 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:15.065 07:41:04 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:15.065 07:41:04 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:15.065 07:41:04 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:09:15.065 07:41:04 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:15.065 07:41:04 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:15.065 07:41:04 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:15.065 07:41:04 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:09:15.065 07:41:04 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:15.065 07:41:04 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:15.065 07:41:04 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:15.065 07:41:04 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:15.065 07:41:04 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:15.065 07:41:04 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:15.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:15.065 07:41:04 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:15.065 07:41:04 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=65595 00:09:15.065 07:41:04 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:15.065 07:41:04 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 65595 00:09:15.065 07:41:04 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:15.065 07:41:04 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 65595 ']' 00:09:15.065 07:41:04 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.065 07:41:04 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:15.065 07:41:04 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.065 07:41:04 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:15.065 07:41:04 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.065 [2024-11-29 07:41:04.907225] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:09:15.065 [2024-11-29 07:41:04.907344] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65595 ] 00:09:15.332 [2024-11-29 07:41:05.068353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:15.332 [2024-11-29 07:41:05.164945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.332 [2024-11-29 07:41:05.164961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:15.903 07:41:05 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:15.903 07:41:05 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:15.903 07:41:05 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:16.164 Nvme0n1 00:09:16.164 07:41:05 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:16.164 07:41:05 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:16.424 request: 00:09:16.424 { 00:09:16.424 "bdev_name": "Nvme0n1", 00:09:16.424 "filename": "non_existing_file", 00:09:16.424 "method": "bdev_nvme_apply_firmware", 00:09:16.424 "req_id": 1 00:09:16.424 } 00:09:16.424 Got JSON-RPC error response 00:09:16.424 response: 00:09:16.424 { 00:09:16.424 "code": -32603, 00:09:16.424 "message": "open file failed." 00:09:16.424 } 00:09:16.424 07:41:06 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:16.424 07:41:06 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:16.424 07:41:06 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:16.686 07:41:06 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:16.686 07:41:06 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 65595 00:09:16.686 07:41:06 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 65595 ']' 00:09:16.686 07:41:06 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 65595 00:09:16.686 07:41:06 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:09:16.686 07:41:06 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:16.686 07:41:06 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65595 00:09:16.686 killing process with pid 65595 00:09:16.686 07:41:06 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:16.686 07:41:06 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:16.686 07:41:06 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65595' 00:09:16.686 07:41:06 nvme_rpc -- common/autotest_common.sh@973 -- # kill 65595 00:09:16.686 07:41:06 nvme_rpc -- common/autotest_common.sh@978 -- # wait 65595 00:09:18.065 ************************************ 00:09:18.065 END TEST nvme_rpc 00:09:18.065 ************************************ 00:09:18.065 00:09:18.065 real 0m3.070s 00:09:18.065 user 0m5.862s 00:09:18.065 sys 0m0.484s 00:09:18.065 07:41:07 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:18.065 07:41:07 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.065 07:41:07 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:18.065 07:41:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:09:18.065 07:41:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:18.065 07:41:07 -- common/autotest_common.sh@10 -- # set +x 00:09:18.065 ************************************ 00:09:18.065 START TEST nvme_rpc_timeouts 00:09:18.065 ************************************ 00:09:18.065 07:41:07 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:18.065 * Looking for test storage... 00:09:18.065 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:18.065 07:41:07 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:18.065 07:41:07 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:09:18.065 07:41:07 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:18.065 07:41:07 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:18.065 07:41:07 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:18.065 07:41:07 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:18.065 07:41:07 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:18.065 07:41:07 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:18.065 07:41:07 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:18.065 07:41:07 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:18.065 07:41:07 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:18.065 07:41:07 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:18.065 07:41:07 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:18.065 07:41:07 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:18.065 07:41:07 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:18.065 07:41:07 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:18.065 07:41:07 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:18.065 07:41:07 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:18.065 07:41:07 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:18.065 07:41:07 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:18.065 07:41:07 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:18.065 07:41:07 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:18.065 07:41:07 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:18.065 07:41:07 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:18.065 07:41:07 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:18.065 07:41:07 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:18.065 07:41:07 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:18.065 07:41:07 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:18.065 07:41:07 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:18.065 07:41:07 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:18.065 07:41:07 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:18.065 07:41:07 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:18.065 07:41:07 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:18.065 07:41:07 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:18.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.065 --rc genhtml_branch_coverage=1 00:09:18.065 --rc genhtml_function_coverage=1 00:09:18.065 --rc genhtml_legend=1 00:09:18.065 --rc geninfo_all_blocks=1 00:09:18.065 --rc geninfo_unexecuted_blocks=1 00:09:18.065 00:09:18.065 ' 00:09:18.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.065 07:41:07 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:18.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.065 --rc genhtml_branch_coverage=1 00:09:18.065 --rc genhtml_function_coverage=1 00:09:18.065 --rc genhtml_legend=1 00:09:18.065 --rc geninfo_all_blocks=1 00:09:18.065 --rc geninfo_unexecuted_blocks=1 00:09:18.065 00:09:18.065 ' 00:09:18.065 07:41:07 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:18.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.066 --rc genhtml_branch_coverage=1 00:09:18.066 --rc genhtml_function_coverage=1 00:09:18.066 --rc genhtml_legend=1 00:09:18.066 --rc geninfo_all_blocks=1 00:09:18.066 --rc geninfo_unexecuted_blocks=1 00:09:18.066 00:09:18.066 ' 00:09:18.066 07:41:07 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:18.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.066 --rc genhtml_branch_coverage=1 00:09:18.066 --rc genhtml_function_coverage=1 00:09:18.066 --rc genhtml_legend=1 00:09:18.066 --rc geninfo_all_blocks=1 00:09:18.066 --rc geninfo_unexecuted_blocks=1 00:09:18.066 00:09:18.066 ' 00:09:18.066 07:41:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:18.066 07:41:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_65654 00:09:18.066 07:41:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_65654 00:09:18.066 07:41:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=65692 00:09:18.066 07:41:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 
${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:09:18.066 07:41:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 65692 00:09:18.066 07:41:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:18.066 07:41:07 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 65692 ']' 00:09:18.066 07:41:07 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.066 07:41:07 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:18.066 07:41:07 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.066 07:41:07 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:18.066 07:41:07 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:18.066 [2024-11-29 07:41:08.004355] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:09:18.066 [2024-11-29 07:41:08.004736] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65692 ] 00:09:18.326 [2024-11-29 07:41:08.176488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:18.326 [2024-11-29 07:41:08.252957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.326 [2024-11-29 07:41:08.252956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:19.261 07:41:08 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:19.261 07:41:08 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:09:19.261 07:41:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:19.261 Checking default timeout settings: 00:09:19.261 07:41:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:19.261 07:41:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:19.261 Making settings changes with rpc: 00:09:19.261 07:41:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:19.519 07:41:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:09:19.519 Check default vs. 
modified settings: 00:09:19.519 07:41:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:20.087 07:41:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:20.087 07:41:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:20.087 07:41:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_65654 00:09:20.087 07:41:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:20.087 07:41:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:20.087 07:41:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:20.087 07:41:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:20.087 07:41:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_65654 00:09:20.087 07:41:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:20.087 Setting action_on_timeout is changed as expected. 00:09:20.087 07:41:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:20.087 07:41:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:20.087 07:41:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:20.087 07:41:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:20.087 07:41:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:20.087 07:41:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:20.087 07:41:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_65654 00:09:20.087 07:41:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:20.087 07:41:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:20.087 07:41:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:20.087 07:41:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_65654 00:09:20.087 Setting timeout_us is changed as expected. 00:09:20.087 07:41:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:20.087 07:41:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:20.087 07:41:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
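Each of the checks above is the same extract-and-compare idiom: grep the setting's line out of the save_config dumps taken before and after bdev_nvme_set_options, keep the second field, strip everything but alphanumerics, and fail if the two values still match. Restated as a self-contained helper (the function name is illustrative; the /tmp/settings_*_65654 files are the ones created by this run):

# Illustrative restatement of the per-setting comparison loop shown above.
check_setting() {
    local name=$1 before after
    before=$(grep "$name" /tmp/settings_default_65654 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep "$name" /tmp/settings_modified_65654 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    if [ "$before" == "$after" ]; then
        echo "ERROR: $name is unchanged ($before)"; return 1
    fi
    echo "Setting $name is changed as expected."   # e.g. timeout_us: 0 -> 12000000
}
check_setting timeout_us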
00:09:20.087 07:41:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:20.087 07:41:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_65654 00:09:20.087 07:41:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:20.087 07:41:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:20.087 07:41:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:20.087 07:41:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:20.087 07:41:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:20.087 07:41:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_65654 00:09:20.087 Setting timeout_admin_us is changed as expected. 00:09:20.087 07:41:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:20.087 07:41:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:20.087 07:41:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:09:20.087 07:41:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:20.087 07:41:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_65654 /tmp/settings_modified_65654 00:09:20.087 07:41:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 65692 00:09:20.087 07:41:09 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 65692 ']' 00:09:20.087 07:41:09 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 65692 00:09:20.087 07:41:09 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:09:20.087 07:41:09 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:20.087 07:41:09 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65692 00:09:20.087 killing process with pid 65692 00:09:20.087 07:41:09 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:20.087 07:41:09 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:20.087 07:41:09 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65692' 00:09:20.087 07:41:09 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 65692 00:09:20.087 07:41:09 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 65692 00:09:21.462 RPC TIMEOUT SETTING TEST PASSED. 00:09:21.462 07:41:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
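All three settings verified here trace back to the single bdev_nvme_set_options call issued earlier in the run. Reproducing it by hand against a freshly started spdk_tgt (before any controller is attached, as this test does) takes two rpc.py calls; the grep is just a quick way to eyeball the saved values:

# Apply the same timeouts and abort-on-timeout policy, then read them back.
./scripts/rpc.py bdev_nvme_set_options \
    --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
./scripts/rpc.py save_config | grep -E 'timeout_us|timeout_admin_us|action_on_timeout'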
00:09:21.462 00:09:21.462 real 0m3.567s 00:09:21.462 user 0m6.933s 00:09:21.462 sys 0m0.519s 00:09:21.462 ************************************ 00:09:21.462 END TEST nvme_rpc_timeouts 00:09:21.462 ************************************ 00:09:21.462 07:41:11 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:21.462 07:41:11 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:21.462 07:41:11 -- spdk/autotest.sh@239 -- # uname -s 00:09:21.462 07:41:11 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:21.462 07:41:11 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:21.462 07:41:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:21.462 07:41:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:21.462 07:41:11 -- common/autotest_common.sh@10 -- # set +x 00:09:21.462 ************************************ 00:09:21.462 START TEST sw_hotplug 00:09:21.462 ************************************ 00:09:21.462 07:41:11 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:21.723 * Looking for test storage... 00:09:21.723 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:21.723 07:41:11 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:21.723 07:41:11 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:09:21.723 07:41:11 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:21.723 07:41:11 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:21.723 07:41:11 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:21.723 07:41:11 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:21.723 07:41:11 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:21.723 07:41:11 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:21.723 07:41:11 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:21.723 07:41:11 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:21.723 07:41:11 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:21.723 07:41:11 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:21.723 07:41:11 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:21.723 07:41:11 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:21.723 07:41:11 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:21.723 07:41:11 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:21.723 07:41:11 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:21.723 07:41:11 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:21.723 07:41:11 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:21.723 07:41:11 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:21.723 07:41:11 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:21.723 07:41:11 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:21.723 07:41:11 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:21.723 07:41:11 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:21.723 07:41:11 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:21.723 07:41:11 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:21.723 07:41:11 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:21.723 07:41:11 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:21.723 07:41:11 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:21.723 07:41:11 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:21.723 07:41:11 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:21.723 07:41:11 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:21.723 07:41:11 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:21.723 07:41:11 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:21.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.723 --rc genhtml_branch_coverage=1 00:09:21.723 --rc genhtml_function_coverage=1 00:09:21.723 --rc genhtml_legend=1 00:09:21.723 --rc geninfo_all_blocks=1 00:09:21.723 --rc geninfo_unexecuted_blocks=1 00:09:21.723 00:09:21.723 ' 00:09:21.723 07:41:11 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:21.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.723 --rc genhtml_branch_coverage=1 00:09:21.723 --rc genhtml_function_coverage=1 00:09:21.723 --rc genhtml_legend=1 00:09:21.723 --rc geninfo_all_blocks=1 00:09:21.723 --rc geninfo_unexecuted_blocks=1 00:09:21.723 00:09:21.723 ' 00:09:21.723 07:41:11 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:21.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.723 --rc genhtml_branch_coverage=1 00:09:21.723 --rc genhtml_function_coverage=1 00:09:21.723 --rc genhtml_legend=1 00:09:21.723 --rc geninfo_all_blocks=1 00:09:21.723 --rc geninfo_unexecuted_blocks=1 00:09:21.723 00:09:21.723 ' 00:09:21.723 07:41:11 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:21.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.723 --rc genhtml_branch_coverage=1 00:09:21.723 --rc genhtml_function_coverage=1 00:09:21.723 --rc genhtml_legend=1 00:09:21.723 --rc geninfo_all_blocks=1 00:09:21.723 --rc geninfo_unexecuted_blocks=1 00:09:21.723 00:09:21.723 ' 00:09:21.723 07:41:11 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:21.985 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:21.985 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:21.985 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:21.985 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:21.985 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:21.985 07:41:11 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:21.985 07:41:11 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:21.985 07:41:11 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
00:09:22.246 07:41:11 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:22.246 07:41:11 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:22.246 07:41:11 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:22.246 07:41:11 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:22.246 07:41:11 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:22.246 07:41:11 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:22.246 07:41:11 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:22.246 07:41:11 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:22.246 07:41:11 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:22.246 07:41:11 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:22.246 07:41:11 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:22.246 07:41:11 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:22.246 07:41:11 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:22.246 07:41:11 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:22.246 07:41:11 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:22.246 07:41:11 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:22.246 07:41:11 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:22.246 07:41:11 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:22.246 07:41:11 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:22.246 07:41:11 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:22.246 07:41:11 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:22.246 07:41:11 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:22.246 07:41:11 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:22.246 07:41:11 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:22.246 07:41:11 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:22.246 07:41:11 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:22.246 07:41:11 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:22.246 07:41:11 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:22.246 07:41:11 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:22.246 07:41:11 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:22.246 07:41:11 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:22.246 07:41:11 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:22.246 07:41:11 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:22.246 07:41:11 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:22.246 07:41:11 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:22.246 07:41:11 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:22.246 07:41:11 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:22.246 07:41:11 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:22.246 07:41:11 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:22.246 07:41:11 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:22.246 07:41:11 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:22.246 07:41:11 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:22.246 07:41:11 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:22.246 07:41:11 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:22.246 07:41:11 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:22.246 07:41:11 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:22.246 07:41:11 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:22.246 07:41:11 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:22.246 07:41:11 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:22.246 07:41:11 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:22.246 07:41:11 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:22.246 07:41:11 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:22.246 07:41:11 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:22.247 07:41:11 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:22.247 07:41:11 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:22.247 07:41:11 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:22.247 07:41:11 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:22.247 07:41:11 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:22.247 07:41:11 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:22.247 07:41:11 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:22.247 07:41:11 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:22.247 07:41:11 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:22.247 07:41:11 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:22.247 07:41:11 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:22.247 07:41:11 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:22.247 07:41:11 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:22.247 07:41:11 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:22.247 07:41:11 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:22.247 07:41:11 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:22.247 07:41:11 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:22.247 07:41:11 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:22.247 07:41:11 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:22.247 07:41:11 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:22.247 07:41:11 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:22.247 07:41:11 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:22.247 07:41:11 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:22.510 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:22.510 Waiting for block devices as requested 00:09:22.510 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:22.771 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:22.771 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:22.771 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:28.062 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:28.062 07:41:17 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:28.062 07:41:17 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:28.324 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:28.324 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:28.324 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:28.585 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:28.846 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:28.846 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:28.846 07:41:18 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:28.846 07:41:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:29.107 07:41:18 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:29.107 07:41:18 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:29.107 07:41:18 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=66543 00:09:29.107 07:41:18 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:29.107 07:41:18 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:29.107 07:41:18 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:29.107 07:41:18 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:29.107 07:41:18 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:09:29.107 07:41:18 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:09:29.107 07:41:18 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:09:29.107 07:41:18 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:09:29.107 07:41:18 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:09:29.107 07:41:18 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:29.107 07:41:18 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:09:29.107 07:41:18 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:29.107 07:41:18 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:29.107 07:41:18 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:29.107 Initializing NVMe Controllers 00:09:29.107 Attaching to 0000:00:10.0 00:09:29.107 Attaching to 0000:00:11.0 00:09:29.107 Attached to 0000:00:11.0 00:09:29.107 Attached to 0000:00:10.0 00:09:29.107 Initialization complete. Starting I/O... 
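The hotplug events that follow are software-driven rather than physical: remove_attach_helper detaches each controller through sysfs, waits out hotplug_wait, then rescans the bus and rebinds the device so the app can re-attach it. A sketch of one such cycle, under the assumption that the bare echoes in the xtrace (whose redirect targets xtrace does not print) go to the standard Linux sysfs nodes; the bdf is one of the two controllers under test:

# Hedged sketch of one surprise-removal/re-attach cycle.
bdf=0000:00:10.0
echo 1 > "/sys/bus/pci/devices/$bdf/remove"    # hot-remove: outstanding I/O gets aborted
sleep 6                                        # hotplug_wait, so the app notices the loss
echo 1 > /sys/bus/pci/rescan                   # bring the device back onto the bus
echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
echo "$bdf" > /sys/bus/pci/drivers_probe       # rebind to the userspace driver
echo '' > "/sys/bus/pci/devices/$bdf/driver_override"   # clear the override again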
00:09:29.369 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:29.369 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:29.369 00:09:30.316 QEMU NVMe Ctrl (12341 ): 2660 I/Os completed (+2660) 00:09:30.316 QEMU NVMe Ctrl (12340 ): 2663 I/Os completed (+2663) 00:09:30.316 00:09:31.260 QEMU NVMe Ctrl (12341 ): 5908 I/Os completed (+3248) 00:09:31.260 QEMU NVMe Ctrl (12340 ): 5903 I/Os completed (+3240) 00:09:31.260 00:09:32.203 QEMU NVMe Ctrl (12341 ): 9124 I/Os completed (+3216) 00:09:32.203 QEMU NVMe Ctrl (12340 ): 9119 I/Os completed (+3216) 00:09:32.203 00:09:33.142 QEMU NVMe Ctrl (12341 ): 12524 I/Os completed (+3400) 00:09:33.142 QEMU NVMe Ctrl (12340 ): 12522 I/Os completed (+3403) 00:09:33.142 00:09:34.514 QEMU NVMe Ctrl (12341 ): 16215 I/Os completed (+3691) 00:09:34.514 QEMU NVMe Ctrl (12340 ): 16204 I/Os completed (+3682) 00:09:34.514 00:09:35.081 07:41:24 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:35.081 07:41:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:35.081 07:41:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:35.081 [2024-11-29 07:41:24.852784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:35.081 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:35.081 [2024-11-29 07:41:24.853840] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:35.081 [2024-11-29 07:41:24.853950] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:35.081 [2024-11-29 07:41:24.853968] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:35.081 [2024-11-29 07:41:24.853983] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:35.081 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:35.081 [2024-11-29 07:41:24.855384] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:35.081 [2024-11-29 07:41:24.855420] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:35.081 [2024-11-29 07:41:24.855433] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:35.081 [2024-11-29 07:41:24.855455] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:35.081 07:41:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:35.081 07:41:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:35.081 [2024-11-29 07:41:24.873468] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:35.081 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:35.081 [2024-11-29 07:41:24.874412] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:35.081 [2024-11-29 07:41:24.874539] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:35.081 [2024-11-29 07:41:24.874572] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:35.081 [2024-11-29 07:41:24.874590] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:35.081 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:35.081 [2024-11-29 07:41:24.875932] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:35.081 [2024-11-29 07:41:24.875966] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:35.082 [2024-11-29 07:41:24.875979] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:35.082 [2024-11-29 07:41:24.875989] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:35.082 07:41:24 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:35.082 07:41:24 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:35.082 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:35.082 EAL: Scan for (pci) bus failed. 00:09:35.082 07:41:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:35.082 07:41:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:35.082 07:41:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:35.082 07:41:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:35.082 07:41:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:35.082 07:41:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:35.082 07:41:25 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:35.339 07:41:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:35.339 Attaching to 0000:00:10.0 00:09:35.339 Attached to 0000:00:10.0 00:09:35.339 QEMU NVMe Ctrl (12340 ): 84 I/Os completed (+84) 00:09:35.339 00:09:35.339 07:41:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:35.339 07:41:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:35.339 07:41:25 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:35.339 Attaching to 0000:00:11.0 00:09:35.339 Attached to 0000:00:11.0 00:09:36.272 QEMU NVMe Ctrl (12340 ): 3812 I/Os completed (+3728) 00:09:36.272 QEMU NVMe Ctrl (12341 ): 3446 I/Os completed (+3446) 00:09:36.272 00:09:37.207 QEMU NVMe Ctrl (12340 ): 7511 I/Os completed (+3699) 00:09:37.207 QEMU NVMe Ctrl (12341 ): 7157 I/Os completed (+3711) 00:09:37.207 00:09:38.141 QEMU NVMe Ctrl (12340 ): 11202 I/Os completed (+3691) 00:09:38.141 QEMU NVMe Ctrl (12341 ): 10867 I/Os completed (+3710) 00:09:38.141 00:09:39.514 QEMU NVMe Ctrl (12340 ): 14898 I/Os completed (+3696) 00:09:39.514 QEMU NVMe Ctrl (12341 ): 14557 I/Os completed (+3690) 00:09:39.514 00:09:40.447 QEMU NVMe Ctrl (12340 ): 18572 I/Os completed (+3674) 00:09:40.447 QEMU NVMe Ctrl (12341 ): 18241 I/Os completed (+3684) 00:09:40.447 00:09:41.380 QEMU NVMe Ctrl (12340 ): 22260 I/Os completed (+3688) 00:09:41.380 QEMU NVMe Ctrl (12341 ): 21929 I/Os completed (+3688) 00:09:41.380 00:09:42.314 QEMU NVMe Ctrl (12340 ): 25939 I/Os completed (+3679) 
00:09:42.314 QEMU NVMe Ctrl (12341 ): 25608 I/Os completed (+3679) 00:09:42.314 00:09:43.249 QEMU NVMe Ctrl (12340 ): 29625 I/Os completed (+3686) 00:09:43.249 QEMU NVMe Ctrl (12341 ): 29308 I/Os completed (+3700) 00:09:43.249 00:09:44.184 QEMU NVMe Ctrl (12340 ): 33278 I/Os completed (+3653) 00:09:44.184 QEMU NVMe Ctrl (12341 ): 32976 I/Os completed (+3668) 00:09:44.184 00:09:45.122 QEMU NVMe Ctrl (12340 ): 36652 I/Os completed (+3374) 00:09:45.122 QEMU NVMe Ctrl (12341 ): 36408 I/Os completed (+3432) 00:09:45.122 00:09:46.498 QEMU NVMe Ctrl (12340 ): 40188 I/Os completed (+3536) 00:09:46.498 QEMU NVMe Ctrl (12341 ): 39937 I/Os completed (+3529) 00:09:46.498 00:09:47.432 QEMU NVMe Ctrl (12340 ): 43866 I/Os completed (+3678) 00:09:47.432 QEMU NVMe Ctrl (12341 ): 43613 I/Os completed (+3676) 00:09:47.432 00:09:47.432 07:41:37 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:09:47.432 07:41:37 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:47.432 07:41:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:47.432 07:41:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:47.432 [2024-11-29 07:41:37.117647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:47.432 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:47.432 [2024-11-29 07:41:37.119058] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.432 [2024-11-29 07:41:37.119165] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.432 [2024-11-29 07:41:37.119198] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.432 [2024-11-29 07:41:37.119268] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.432 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:47.432 [2024-11-29 07:41:37.120921] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.432 [2024-11-29 07:41:37.120993] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.432 [2024-11-29 07:41:37.121029] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.432 [2024-11-29 07:41:37.121058] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.432 07:41:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:47.432 07:41:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:47.432 [2024-11-29 07:41:37.135489] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:47.432 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:47.432 [2024-11-29 07:41:37.136468] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.432 [2024-11-29 07:41:37.136521] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.432 [2024-11-29 07:41:37.136550] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.432 [2024-11-29 07:41:37.136575] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.432 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:47.432 [2024-11-29 07:41:37.137934] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.432 [2024-11-29 07:41:37.138037] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.432 [2024-11-29 07:41:37.138091] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.432 [2024-11-29 07:41:37.138116] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.432 07:41:37 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:47.432 07:41:37 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:47.432 07:41:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:47.432 07:41:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:47.432 07:41:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:47.432 07:41:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:47.432 07:41:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:47.432 07:41:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:47.432 07:41:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:47.432 07:41:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:47.432 Attaching to 0000:00:10.0 00:09:47.432 Attached to 0000:00:10.0 00:09:47.432 07:41:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:47.690 07:41:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:47.690 07:41:37 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:47.690 Attaching to 0000:00:11.0 00:09:47.690 Attached to 0000:00:11.0 00:09:48.262 QEMU NVMe Ctrl (12340 ): 2774 I/Os completed (+2774) 00:09:48.262 QEMU NVMe Ctrl (12341 ): 2418 I/Os completed (+2418) 00:09:48.262 00:09:49.200 QEMU NVMe Ctrl (12340 ): 6209 I/Os completed (+3435) 00:09:49.200 QEMU NVMe Ctrl (12341 ): 5849 I/Os completed (+3431) 00:09:49.200 00:09:50.134 QEMU NVMe Ctrl (12340 ): 9930 I/Os completed (+3721) 00:09:50.134 QEMU NVMe Ctrl (12341 ): 9568 I/Os completed (+3719) 00:09:50.134 00:09:51.152 QEMU NVMe Ctrl (12340 ): 13641 I/Os completed (+3711) 00:09:51.152 QEMU NVMe Ctrl (12341 ): 13288 I/Os completed (+3720) 00:09:51.152 00:09:52.532 QEMU NVMe Ctrl (12340 ): 17286 I/Os completed (+3645) 00:09:52.532 QEMU NVMe Ctrl (12341 ): 17046 I/Os completed (+3758) 00:09:52.532 00:09:53.468 QEMU NVMe Ctrl (12340 ): 20353 I/Os completed (+3067) 00:09:53.468 QEMU NVMe Ctrl (12341 ): 20104 I/Os completed (+3058) 00:09:53.468 00:09:54.400 QEMU NVMe Ctrl (12340 ): 24048 I/Os completed (+3695) 00:09:54.400 QEMU NVMe Ctrl (12341 ): 23806 I/Os completed (+3702) 00:09:54.400 00:09:55.337 QEMU NVMe Ctrl (12340 ): 27752 I/Os completed (+3704) 00:09:55.337 QEMU NVMe Ctrl (12341 ): 27504 I/Os completed (+3698) 00:09:55.337 
00:09:56.272 QEMU NVMe Ctrl (12340 ): 31442 I/Os completed (+3690) 00:09:56.272 QEMU NVMe Ctrl (12341 ): 31193 I/Os completed (+3689) 00:09:56.272 00:09:57.207 QEMU NVMe Ctrl (12340 ): 35125 I/Os completed (+3683) 00:09:57.207 QEMU NVMe Ctrl (12341 ): 34888 I/Os completed (+3695) 00:09:57.207 00:09:58.141 QEMU NVMe Ctrl (12340 ): 38815 I/Os completed (+3690) 00:09:58.141 QEMU NVMe Ctrl (12341 ): 38577 I/Os completed (+3689) 00:09:58.141 00:09:59.517 QEMU NVMe Ctrl (12340 ): 42592 I/Os completed (+3777) 00:09:59.517 QEMU NVMe Ctrl (12341 ): 42695 I/Os completed (+4118) 00:09:59.517 00:09:59.517 07:41:49 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:09:59.517 07:41:49 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:59.517 07:41:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:59.517 07:41:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:59.517 [2024-11-29 07:41:49.389867] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:59.517 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:59.517 [2024-11-29 07:41:49.391409] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.517 [2024-11-29 07:41:49.391749] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.517 [2024-11-29 07:41:49.391782] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.517 [2024-11-29 07:41:49.391803] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.517 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:59.517 [2024-11-29 07:41:49.394888] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.517 [2024-11-29 07:41:49.394972] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.517 [2024-11-29 07:41:49.394992] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.517 [2024-11-29 07:41:49.395011] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.517 07:41:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:59.518 07:41:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:59.518 [2024-11-29 07:41:49.417538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:59.518 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:59.518 [2024-11-29 07:41:49.419073] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.518 [2024-11-29 07:41:49.419144] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.518 [2024-11-29 07:41:49.419169] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.518 [2024-11-29 07:41:49.419188] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.518 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:59.518 [2024-11-29 07:41:49.421358] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.518 [2024-11-29 07:41:49.421420] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.518 [2024-11-29 07:41:49.421464] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.518 [2024-11-29 07:41:49.421481] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.518 07:41:49 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:59.518 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:59.518 EAL: Scan for (pci) bus failed. 00:09:59.518 07:41:49 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:59.778 07:41:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:59.778 07:41:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:59.778 07:41:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:59.778 07:41:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:59.778 07:41:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:59.778 07:41:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:59.778 07:41:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:59.778 07:41:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:59.778 Attaching to 0000:00:10.0 00:09:59.778 Attached to 0000:00:10.0 00:09:59.778 07:41:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:59.778 07:41:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:59.778 07:41:49 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:59.778 Attaching to 0000:00:11.0 00:09:59.778 Attached to 0000:00:11.0 00:09:59.778 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:59.778 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:59.778 [2024-11-29 07:41:49.678232] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:12.010 07:42:01 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:12.010 07:42:01 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:12.010 07:42:01 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.82 00:10:12.010 07:42:01 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.82 00:10:12.010 07:42:01 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:10:12.010 07:42:01 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.82 00:10:12.010 07:42:01 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.82 2 00:10:12.010 remove_attach_helper took 42.82s to complete (handling 2 nvme drive(s)) 07:42:01 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:18.602 07:42:07 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 66543 00:10:18.602 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (66543) - No such process 00:10:18.602 07:42:07 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 66543 00:10:18.602 07:42:07 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:18.602 07:42:07 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:18.602 07:42:07 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:18.602 07:42:07 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67093 00:10:18.602 07:42:07 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:18.602 07:42:07 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67093 00:10:18.602 07:42:07 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:18.602 07:42:07 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 67093 ']' 00:10:18.602 07:42:07 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.602 07:42:07 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:18.602 07:42:07 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.602 07:42:07 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:18.603 07:42:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:18.603 [2024-11-29 07:42:07.764196] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
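With the scripted phase over, the harness confirms the previous helper really exited (kill -0 66543 failing with "No such process" is the success case here) and brings up a standalone spdk_tgt for the RPC-driven phase, with a trap that rescans the PCI bus on any abort so the node is not left with its NVMe devices detached. The launch-and-wait sequence reduces to roughly the sketch below; the real waitforlisten also checks that the socket answers RPCs rather than merely existing:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
  spdk_tgt_pid=$!
  trap 'kill "$spdk_tgt_pid"; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT
  for ((i = 0; i < 100; i++)); do                    # max_retries=100, as in the trace
    [[ -S /var/tmp/spdk.sock ]] && break             # RPC socket is up
    kill -0 "$spdk_tgt_pid" 2>/dev/null || exit 1    # target died during startup
    sleep 0.5
  done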
00:10:18.603 [2024-11-29 07:42:07.764625] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67093 ] 00:10:18.603 [2024-11-29 07:42:07.931940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.603 [2024-11-29 07:42:08.082867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.176 07:42:08 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:19.177 07:42:08 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:10:19.177 07:42:08 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:19.177 07:42:08 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.177 07:42:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:19.177 07:42:08 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.177 07:42:08 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:19.177 07:42:08 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:19.177 07:42:08 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:19.177 07:42:08 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:19.177 07:42:08 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:19.177 07:42:08 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:19.177 07:42:08 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:19.177 07:42:08 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:10:19.177 07:42:08 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:19.177 07:42:08 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:19.177 07:42:08 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:19.177 07:42:08 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:19.177 07:42:08 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:25.798 07:42:14 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:25.798 07:42:14 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:25.798 07:42:14 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:25.798 07:42:14 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:25.798 07:42:14 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:25.798 07:42:14 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:25.798 07:42:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:25.798 07:42:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:25.798 07:42:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:25.798 07:42:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:25.798 07:42:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:25.798 07:42:14 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.798 07:42:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:25.798 07:42:14 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.798 07:42:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:25.798 07:42:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:25.798 [2024-11-29 07:42:14.978068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0, 0] in failed state. 00:10:25.798 [2024-11-29 07:42:14.979364] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.798 [2024-11-29 07:42:14.979403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:25.798 [2024-11-29 07:42:14.979419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.798 [2024-11-29 07:42:14.979440] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.798 [2024-11-29 07:42:14.979457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:25.798 [2024-11-29 07:42:14.979466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.798 [2024-11-29 07:42:14.979474] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.798 [2024-11-29 07:42:14.979483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:25.798 [2024-11-29 07:42:14.979490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.798 [2024-11-29 07:42:14.979502] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.798 [2024-11-29 07:42:14.979509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:25.798 [2024-11-29 07:42:14.979518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.798 [2024-11-29 07:42:15.378054] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
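Two things are happening in this stretch. First, tgt_run_hotplug switches the test from script-driven sysfs pokes to target-driven monitoring: the rpc_cmd bdev_nvme_set_hotplug -e traced at sw_hotplug.sh:115 tells the bdev layer itself to poll for removals and re-arrivals. Outside the test wrappers, the same call is just an RPC against the target's socket:

  # enable hotplug monitoring in a running target; -d disables it again
  scripts/rpc.py -s /var/tmp/spdk.sock bdev_nvme_set_hotplug -e

Second, the ASYNC EVENT REQUEST / "ABORTED - BY REQUEST (00/07)" pairs that follow every removal are expected noise: failing a yanked controller aborts the async-event requests parked on its admin queue, and status 00/07 is simply "command abort requested".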
00:10:25.798 [2024-11-29 07:42:15.379349] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.798 [2024-11-29 07:42:15.379380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:25.798 [2024-11-29 07:42:15.379393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.798 [2024-11-29 07:42:15.379407] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.798 [2024-11-29 07:42:15.379416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:25.798 [2024-11-29 07:42:15.379423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.798 [2024-11-29 07:42:15.379433] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.798 [2024-11-29 07:42:15.379440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:25.798 [2024-11-29 07:42:15.379462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.798 [2024-11-29 07:42:15.379470] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.798 [2024-11-29 07:42:15.379479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:25.798 [2024-11-29 07:42:15.379486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.798 07:42:15 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:25.798 07:42:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:25.798 07:42:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:25.798 07:42:15 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:25.798 07:42:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:25.799 07:42:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:25.799 07:42:15 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.799 07:42:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:25.799 07:42:15 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.799 07:42:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:25.799 07:42:15 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:25.799 07:42:15 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:25.799 07:42:15 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:25.799 07:42:15 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:25.799 07:42:15 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:25.799 07:42:15 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:25.799 07:42:15 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:25.799 07:42:15 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:25.799 07:42:15 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:10:26.057 07:42:15 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:26.057 07:42:15 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:26.057 07:42:15 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:38.270 07:42:27 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:38.270 07:42:27 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:38.270 07:42:27 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:38.270 07:42:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:38.270 07:42:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:38.270 07:42:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:38.270 07:42:27 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.270 07:42:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:38.270 07:42:27 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.270 07:42:27 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:38.270 07:42:27 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:38.270 07:42:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:38.270 07:42:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:38.270 07:42:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:38.270 07:42:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:38.270 07:42:27 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:38.270 07:42:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:38.270 07:42:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:38.270 07:42:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:38.270 07:42:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:38.270 07:42:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:38.270 07:42:27 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.270 07:42:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:38.270 07:42:27 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.270 [2024-11-29 07:42:27.878267] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
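The "gone yet?" check is the bdev_bdfs helper traced at sw_hotplug.sh:12-13: it asks the target for every bdev, pulls the controller PCI address out of each NVMe-specific blob, and deduplicates. Reassembled from the trace:

  bdev_bdfs() {
    # every NVMe-backed bdev reports its controller's PCI address
    rpc_cmd bdev_get_bdevs \
      | jq -r '.[].driver_specific.nvme[].pci_address' \
      | sort -u
  }
  bdfs=($(bdev_bdfs))    # e.g. (0000:00:10.0 0000:00:11.0), hence the (( 2 > 0 )) above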
00:10:38.270 07:42:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:38.270 07:42:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:38.270 [2024-11-29 07:42:27.879496] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.270 [2024-11-29 07:42:27.879528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:38.270 [2024-11-29 07:42:27.879539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:38.270 [2024-11-29 07:42:27.879555] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.270 [2024-11-29 07:42:27.879563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:38.270 [2024-11-29 07:42:27.879572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:38.270 [2024-11-29 07:42:27.879579] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.271 [2024-11-29 07:42:27.879587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:38.271 [2024-11-29 07:42:27.879593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:38.271 [2024-11-29 07:42:27.879601] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.271 [2024-11-29 07:42:27.879607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:38.271 [2024-11-29 07:42:27.879615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:38.529 [2024-11-29 07:42:28.378264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
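That (( 2 > 0 )) / sleep 0.5 rhythm, repeated until the count reaches (( 0 > 0 )), is the removal-side wait loop: keep re-listing the bdevs until the set is empty, announcing the stragglers each pass. Using the helper sketched above, it amounts to something like:

  while bdfs=($(bdev_bdfs)) && ((${#bdfs[@]} > 0)); do
    printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
    sleep 0.5
  done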
00:10:38.529 [2024-11-29 07:42:28.379414] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.529 [2024-11-29 07:42:28.379458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:38.529 [2024-11-29 07:42:28.379470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:38.529 [2024-11-29 07:42:28.379482] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.529 [2024-11-29 07:42:28.379491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:38.529 [2024-11-29 07:42:28.379498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:38.529 [2024-11-29 07:42:28.379506] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.529 [2024-11-29 07:42:28.379512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:38.529 [2024-11-29 07:42:28.379520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:38.529 [2024-11-29 07:42:28.379527] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.529 [2024-11-29 07:42:28.379535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:38.529 [2024-11-29 07:42:28.379541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:38.529 07:42:28 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:38.529 07:42:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:38.529 07:42:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:38.529 07:42:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:38.529 07:42:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:38.529 07:42:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:38.529 07:42:28 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.529 07:42:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:38.529 07:42:28 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.529 07:42:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:38.529 07:42:28 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:38.787 07:42:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:38.787 07:42:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:38.787 07:42:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:38.787 07:42:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:38.788 07:42:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:38.788 07:42:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:38.788 07:42:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:38.788 07:42:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:10:38.788 07:42:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:38.788 07:42:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:38.788 07:42:28 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:51.005 07:42:40 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:51.005 07:42:40 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:51.005 07:42:40 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:51.005 07:42:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:51.005 07:42:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:51.005 07:42:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:51.005 07:42:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.005 07:42:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:51.005 07:42:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.005 07:42:40 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:51.005 07:42:40 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:51.005 07:42:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:51.005 07:42:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:51.005 07:42:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:51.005 07:42:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:51.005 07:42:40 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:51.005 07:42:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:51.005 07:42:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:51.005 07:42:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:51.005 07:42:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:51.005 07:42:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:51.005 07:42:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.005 07:42:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:51.005 07:42:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.005 07:42:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:51.005 07:42:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:51.005 [2024-11-29 07:42:40.778889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:10:51.005 [2024-11-29 07:42:40.780300] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.005 [2024-11-29 07:42:40.780406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:51.005 [2024-11-29 07:42:40.780479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:51.005 [2024-11-29 07:42:40.780520] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.005 [2024-11-29 07:42:40.780539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:51.005 [2024-11-29 07:42:40.780566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:51.005 [2024-11-29 07:42:40.780590] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.005 [2024-11-29 07:42:40.780609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:51.005 [2024-11-29 07:42:40.780679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:51.005 [2024-11-29 07:42:40.780707] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.005 [2024-11-29 07:42:40.780724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:51.005 [2024-11-29 07:42:40.780749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:51.265 [2024-11-29 07:42:41.178881] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:51.265 [2024-11-29 07:42:41.180185] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.265 [2024-11-29 07:42:41.180283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:51.265 [2024-11-29 07:42:41.180345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:51.265 [2024-11-29 07:42:41.180376] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.265 [2024-11-29 07:42:41.180394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:51.265 [2024-11-29 07:42:41.180417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:51.265 [2024-11-29 07:42:41.180454] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.265 [2024-11-29 07:42:41.180473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:51.265 [2024-11-29 07:42:41.180548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:51.265 [2024-11-29 07:42:41.180574] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.266 [2024-11-29 07:42:41.180592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:51.266 [2024-11-29 07:42:41.180614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:51.526 07:42:41 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:51.526 07:42:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:51.526 07:42:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:51.526 07:42:41 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:51.526 07:42:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:51.526 07:42:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:51.526 07:42:41 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.526 07:42:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:51.526 07:42:41 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.526 07:42:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:51.526 07:42:41 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:51.526 07:42:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:51.526 07:42:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:51.527 07:42:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:51.790 07:42:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:51.790 07:42:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:51.790 07:42:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:51.790 07:42:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:51.790 07:42:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:10:51.790 07:42:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:51.790 07:42:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:51.790 07:42:41 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:04.023 07:42:53 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:04.023 07:42:53 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:04.023 07:42:53 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:04.023 07:42:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:04.023 07:42:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:04.023 07:42:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:04.023 07:42:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.023 07:42:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:04.023 07:42:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.023 07:42:53 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:04.023 07:42:53 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:04.023 07:42:53 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.74 00:11:04.023 07:42:53 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.74 00:11:04.023 07:42:53 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:04.023 07:42:53 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.74 00:11:04.023 07:42:53 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.74 2 00:11:04.023 remove_attach_helper took 44.74s to complete (handling 2 nvme drive(s)) 07:42:53 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:04.023 07:42:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.023 07:42:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:04.023 07:42:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.023 07:42:53 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:04.023 07:42:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.023 07:42:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:04.023 07:42:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.024 07:42:53 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:04.024 07:42:53 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:04.024 07:42:53 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:04.024 07:42:53 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:04.024 07:42:53 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:04.024 07:42:53 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:04.024 07:42:53 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:04.024 07:42:53 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:04.024 07:42:53 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:04.024 07:42:53 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:04.024 07:42:53 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:04.024 07:42:53 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:04.024 07:42:53 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:10.609 07:42:59 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:10.609 07:42:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:10.609 07:42:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:10.609 07:42:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:10.609 07:42:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:10.609 07:42:59 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:10.609 07:42:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:10.609 07:42:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:10.609 07:42:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:10.609 07:42:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:10.609 07:42:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:10.609 07:42:59 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.609 07:42:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:10.609 07:42:59 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.609 07:42:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:10.609 07:42:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:10.609 [2024-11-29 07:42:59.746819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:10.609 [2024-11-29 07:42:59.747937] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.609 [2024-11-29 07:42:59.747976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.609 [2024-11-29 07:42:59.747988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.609 [2024-11-29 07:42:59.748008] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.609 [2024-11-29 07:42:59.748016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.609 [2024-11-29 07:42:59.748024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.609 [2024-11-29 07:42:59.748032] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.609 [2024-11-29 07:42:59.748043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.609 [2024-11-29 07:42:59.748049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.609 [2024-11-29 07:42:59.748058] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.609 [2024-11-29 07:42:59.748065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.609 [2024-11-29 07:42:59.748075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.609 07:43:00 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:10.609 07:43:00 
sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:10.609 07:43:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:10.609 07:43:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:10.609 07:43:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:10.609 07:43:00 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.609 07:43:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:10.609 07:43:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:10.609 07:43:00 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.609 07:43:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:10.609 07:43:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:10.609 [2024-11-29 07:43:00.446828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:10.609 [2024-11-29 07:43:00.447817] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.609 [2024-11-29 07:43:00.447852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.609 [2024-11-29 07:43:00.447866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.609 [2024-11-29 07:43:00.447884] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.609 [2024-11-29 07:43:00.447894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.609 [2024-11-29 07:43:00.447901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.609 [2024-11-29 07:43:00.447910] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.609 [2024-11-29 07:43:00.447917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.609 [2024-11-29 07:43:00.447925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.609 [2024-11-29 07:43:00.447932] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.609 [2024-11-29 07:43:00.447940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.609 [2024-11-29 07:43:00.447947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.871 07:43:00 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:10.871 07:43:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:10.871 07:43:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:10.871 07:43:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:10.871 07:43:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:10.871 07:43:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:10.871 07:43:00 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.871 07:43:00 sw_hotplug -- 
common/autotest_common.sh@10 -- # set +x 00:11:10.871 07:43:00 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.132 07:43:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:11.132 07:43:00 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:11.132 07:43:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:11.132 07:43:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:11.132 07:43:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:11.132 07:43:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:11.132 07:43:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:11.132 07:43:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:11.132 07:43:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:11.132 07:43:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:11.132 07:43:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:11.132 07:43:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:11.132 07:43:01 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:23.369 07:43:13 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:23.369 07:43:13 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:23.369 07:43:13 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:23.369 07:43:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:23.369 07:43:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:23.369 07:43:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:23.369 07:43:13 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.369 07:43:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:23.369 07:43:13 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.369 07:43:13 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:23.369 07:43:13 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:23.369 07:43:13 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:23.369 07:43:13 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:23.369 07:43:13 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:23.369 07:43:13 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:23.369 07:43:13 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:23.369 07:43:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:23.369 07:43:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:23.369 07:43:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:23.369 07:43:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:23.369 07:43:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:23.369 07:43:13 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.369 07:43:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:23.369 [2024-11-29 07:43:13.147477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
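The 42.82 and 44.74 figures reported earlier come out of timing_cmd, which runs the helper under bash's time builtin with TIMEFORMAT=%2R (elapsed real seconds, two decimals) and echoes the measurement for the caller to capture. Stripped of the xtrace, the mechanism is roughly the sketch below; the real helper preserves the timed command's output with exec-based fd shuffling instead of discarding it:

  timing_cmd() {
    local time=0 TIMEFORMAT=%2R                       # report real time only
    time=$( { time "$@" > /dev/null 2>&1; } 2>&1 )    # time writes to the group's stderr
    echo "$time"
  }
  helper_time=$(timing_cmd remove_attach_helper 3 6 true)
  printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
    "$helper_time" 2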
00:11:23.369 [2024-11-29 07:43:13.148586] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.369 [2024-11-29 07:43:13.148625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.369 [2024-11-29 07:43:13.148637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.369 [2024-11-29 07:43:13.148658] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.369 [2024-11-29 07:43:13.148666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.369 [2024-11-29 07:43:13.148677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.369 [2024-11-29 07:43:13.148685] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.369 [2024-11-29 07:43:13.148694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.369 [2024-11-29 07:43:13.148701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.369 [2024-11-29 07:43:13.148710] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.369 [2024-11-29 07:43:13.148717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.369 [2024-11-29 07:43:13.148724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.369 07:43:13 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.369 07:43:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:23.369 07:43:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:23.943 [2024-11-29 07:43:13.647470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:23.943 [2024-11-29 07:43:13.648398] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.943 [2024-11-29 07:43:13.648427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.943 [2024-11-29 07:43:13.648439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.943 [2024-11-29 07:43:13.648464] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.943 [2024-11-29 07:43:13.648476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.943 [2024-11-29 07:43:13.648482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.943 [2024-11-29 07:43:13.648491] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.943 [2024-11-29 07:43:13.648498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.943 [2024-11-29 07:43:13.648507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.943 [2024-11-29 07:43:13.648514] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.943 [2024-11-29 07:43:13.648522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.943 [2024-11-29 07:43:13.648529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.943 07:43:13 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:23.943 07:43:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:23.943 07:43:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:23.943 07:43:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:23.943 07:43:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:23.943 07:43:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:23.943 07:43:13 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.943 07:43:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:23.943 07:43:13 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.943 07:43:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:23.943 07:43:13 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:23.943 07:43:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:23.943 07:43:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:23.943 07:43:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:24.204 07:43:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:24.204 07:43:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:24.204 07:43:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:24.204 07:43:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:24.204 07:43:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:24.204 07:43:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:24.204 07:43:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:24.204 07:43:13 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:36.443 07:43:25 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:36.443 07:43:25 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:36.443 07:43:25 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:36.443 07:43:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:36.443 07:43:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:36.443 07:43:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:36.443 07:43:26 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.443 07:43:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:36.443 07:43:26 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.443 07:43:26 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:36.443 07:43:26 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:36.443 07:43:26 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:36.443 07:43:26 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:36.443 [2024-11-29 07:43:26.047696] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:36.443 [2024-11-29 07:43:26.048934] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.443 [2024-11-29 07:43:26.049100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.443 [2024-11-29 07:43:26.049166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.443 [2024-11-29 07:43:26.049206] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.443 [2024-11-29 07:43:26.049225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.443 [2024-11-29 07:43:26.049250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.443 [2024-11-29 07:43:26.049274] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.443 [2024-11-29 07:43:26.049295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.443 [2024-11-29 07:43:26.049351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.443 [2024-11-29 07:43:26.049382] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.443 [2024-11-29 07:43:26.049398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.443 [2024-11-29 07:43:26.049422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.443 07:43:26 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:36.443 07:43:26 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:36.443 07:43:26 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:36.443 07:43:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:36.443 07:43:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:36.443 07:43:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:36.443 07:43:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:36.443 07:43:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:36.443 07:43:26 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.443 07:43:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:36.443 07:43:26 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.443 07:43:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:36.443 07:43:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:36.704 [2024-11-29 07:43:26.547696] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:36.704 [2024-11-29 07:43:26.548620] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.704 [2024-11-29 07:43:26.548648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.704 [2024-11-29 07:43:26.548659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.704 [2024-11-29 07:43:26.548670] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.704 [2024-11-29 07:43:26.548679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.704 [2024-11-29 07:43:26.548686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.704 [2024-11-29 07:43:26.548695] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.704 [2024-11-29 07:43:26.548702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.704 [2024-11-29 07:43:26.548711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.704 [2024-11-29 07:43:26.548718] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.704 [2024-11-29 07:43:26.548729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.704 [2024-11-29 07:43:26.548735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.704 07:43:26 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:36.704 07:43:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:36.704 07:43:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:36.704 07:43:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:36.704 07:43:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:36.704 07:43:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:11:36.704 07:43:26 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.704 07:43:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:36.704 07:43:26 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.967 07:43:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:36.967 07:43:26 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:36.967 07:43:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:36.967 07:43:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:36.967 07:43:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:36.967 07:43:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:36.967 07:43:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:36.967 07:43:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:36.967 07:43:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:36.967 07:43:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:36.967 07:43:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:37.228 07:43:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:37.228 07:43:26 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:49.474 07:43:38 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:49.474 07:43:38 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:49.474 07:43:38 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:49.474 07:43:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:49.474 07:43:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:49.474 07:43:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:49.474 07:43:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.474 07:43:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:49.474 07:43:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.474 07:43:38 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:49.474 07:43:38 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:49.474 07:43:38 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.30 00:11:49.474 07:43:38 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.30 00:11:49.474 07:43:38 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:49.474 07:43:38 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.30 00:11:49.474 07:43:38 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.30 2 00:11:49.474 remove_attach_helper took 45.30s to complete (handling 2 nvme drive(s)) 07:43:38 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:11:49.474 07:43:38 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67093 00:11:49.474 07:43:38 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 67093 ']' 00:11:49.474 07:43:38 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 67093 00:11:49.474 07:43:38 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:11:49.474 07:43:38 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:49.474 07:43:38 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67093 00:11:49.474 killing process with pid 67093 00:11:49.474 07:43:38 sw_hotplug -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:11:49.474 07:43:38 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:49.474 07:43:38 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67093' 00:11:49.474 07:43:38 sw_hotplug -- common/autotest_common.sh@973 -- # kill 67093 00:11:49.474 07:43:38 sw_hotplug -- common/autotest_common.sh@978 -- # wait 67093 00:11:50.417 07:43:40 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:50.679 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:51.252 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:51.252 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:51.252 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:51.252 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:51.515 ************************************ 00:11:51.515 END TEST sw_hotplug 00:11:51.515 ************************************ 00:11:51.515 00:11:51.515 real 2m29.845s 00:11:51.515 user 1m51.484s 00:11:51.515 sys 0m16.943s 00:11:51.515 07:43:41 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:51.515 07:43:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:51.515 07:43:41 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:11:51.515 07:43:41 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:51.515 07:43:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:51.515 07:43:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:51.515 07:43:41 -- common/autotest_common.sh@10 -- # set +x 00:11:51.515 ************************************ 00:11:51.515 START TEST nvme_xnvme 00:11:51.515 ************************************ 00:11:51.515 07:43:41 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:51.515 * Looking for test storage... 
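The hotplug loop traced above is driven by two small helpers. bdev_bdfs is fully visible in the xtrace at nvme/sw_hotplug.sh@12-13; the remove/attach echoes at @40 and @56-@62 redirect into sysfs files that xtrace does not record, so the paths in the second half of this sketch are an assumption based on the standard Linux PCI hotplug knobs, not something the log confirms.

    #!/usr/bin/env bash
    # bdev_bdfs, as traced at nvme/sw_hotplug.sh@12-13: ask the SPDK target for
    # all bdevs over JSON-RPC and reduce them to a sorted, unique list of NVMe
    # PCI addresses. rpc_cmd is the test framework's JSON-RPC wrapper.
    bdev_bdfs() {
        jq -r '.[].driver_specific.nvme[].pci_address' \
            <(rpc_cmd bdev_get_bdevs) | sort -u
    }

    # Hedged reconstruction of the surprise-removal half of the cycle; the two
    # controllers are the ones named throughout the trace, but the redirection
    # target is assumed (xtrace prints only the echo, never the redirect).
    nvmes=(0000:00:10.0 0000:00:11.0)
    for dev in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/$dev/remove"   # assumed sysfs hot-remove
    done

The ABORTED - BY REQUEST completions above are the expected fallout of that removal: each pulled controller is marked "in failed state" and the driver aborts its outstanding admin commands (the four pending ASYNC EVENT REQUESTs, cid 187-190) before the bdevs disappear and the 45.30 s remove/attach cycle is declared complete.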
00:11:51.515 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:51.515 07:43:41 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:51.515 07:43:41 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:11:51.515 07:43:41 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:51.515 07:43:41 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:51.515 07:43:41 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:51.515 07:43:41 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:51.515 07:43:41 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:51.515 07:43:41 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:11:51.515 07:43:41 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:11:51.515 07:43:41 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:11:51.515 07:43:41 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:11:51.515 07:43:41 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:11:51.515 07:43:41 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:11:51.515 07:43:41 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:11:51.515 07:43:41 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:51.515 07:43:41 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:11:51.515 07:43:41 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:11:51.515 07:43:41 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:51.515 07:43:41 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:51.515 07:43:41 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:11:51.515 07:43:41 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:11:51.515 07:43:41 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:51.515 07:43:41 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:11:51.515 07:43:41 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:11:51.515 07:43:41 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:11:51.515 07:43:41 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:11:51.515 07:43:41 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:51.515 07:43:41 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:11:51.515 07:43:41 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:11:51.515 07:43:41 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:51.515 07:43:41 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:51.515 07:43:41 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:11:51.515 07:43:41 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:51.515 07:43:41 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:51.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.515 --rc genhtml_branch_coverage=1 00:11:51.515 --rc genhtml_function_coverage=1 00:11:51.515 --rc genhtml_legend=1 00:11:51.515 --rc geninfo_all_blocks=1 00:11:51.515 --rc geninfo_unexecuted_blocks=1 00:11:51.515 00:11:51.515 ' 00:11:51.515 07:43:41 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:51.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.515 --rc genhtml_branch_coverage=1 00:11:51.515 --rc genhtml_function_coverage=1 00:11:51.515 --rc genhtml_legend=1 00:11:51.515 --rc geninfo_all_blocks=1 00:11:51.515 --rc geninfo_unexecuted_blocks=1 00:11:51.515 00:11:51.515 ' 00:11:51.515 07:43:41 
nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:51.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.515 --rc genhtml_branch_coverage=1 00:11:51.515 --rc genhtml_function_coverage=1 00:11:51.515 --rc genhtml_legend=1 00:11:51.515 --rc geninfo_all_blocks=1 00:11:51.515 --rc geninfo_unexecuted_blocks=1 00:11:51.515 00:11:51.515 ' 00:11:51.515 07:43:41 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:51.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.515 --rc genhtml_branch_coverage=1 00:11:51.515 --rc genhtml_function_coverage=1 00:11:51.515 --rc genhtml_legend=1 00:11:51.515 --rc geninfo_all_blocks=1 00:11:51.515 --rc geninfo_unexecuted_blocks=1 00:11:51.515 00:11:51.515 ' 00:11:51.515 07:43:41 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:11:51.515 07:43:41 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:11:51.515 07:43:41 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:51.515 07:43:41 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:11:51.515 07:43:41 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:51.515 07:43:41 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:51.515 07:43:41 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:51.515 07:43:41 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:11:51.515 07:43:41 nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:11:51.515 07:43:41 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:11:51.515 07:43:41 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@20 -- # 
CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 
00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:51.516 07:43:41 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:51.780 07:43:41 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:51.780 07:43:41 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:51.780 07:43:41 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:51.781 07:43:41 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:51.781 07:43:41 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:51.781 07:43:41 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:51.781 07:43:41 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:51.781 07:43:41 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:11:51.781 07:43:41 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:11:51.781 07:43:41 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:11:51.781 07:43:41 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:11:51.781 07:43:41 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:11:51.781 07:43:41 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:11:51.781 07:43:41 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:11:51.781 07:43:41 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 
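applications.sh, traced just above, pins every helper path to the repository root before any app arrays are declared. Below is a sketch reassembled from the values printed at @8-@12; how line 9 climbs from test/common back to the root is not visible in the trace, so the suffix strip is an assumption:

    #!/usr/bin/env bash
    # Path derivation in test/common/applications.sh, per the xtrace above.
    _root=$(readlink -f "$(dirname "${BASH_SOURCE[0]}")")  # .../spdk/test/common
    _root=${_root%/test/common}        # assumption: climb back to the repo root
    _app_dir=$_root/build/bin          # built binaries (spdk_tgt, nvmf_tgt, ...)
    _test_app_dir=$_root/test/app      # test-only apps such as the fuzzers
    _examples_dir=$_root/build/examples

The app arrays that follow (VHOST_FUZZ_APP, SPDK_APP, and the rest) are all composed from these three directories, and the config.h probe right after them is what confirms SPDK_CONFIG_DEBUG for this build.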
00:11:51.781 07:43:41 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:51.781 07:43:41 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:51.781 07:43:41 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:51.781 07:43:41 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:51.781 07:43:41 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:51.781 07:43:41 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:51.781 07:43:41 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:11:51.781 07:43:41 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:51.781 #define SPDK_CONFIG_H 00:11:51.781 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:51.781 #define SPDK_CONFIG_APPS 1 00:11:51.781 #define SPDK_CONFIG_ARCH native 00:11:51.781 #define SPDK_CONFIG_ASAN 1 00:11:51.781 #undef SPDK_CONFIG_AVAHI 00:11:51.781 #undef SPDK_CONFIG_CET 00:11:51.781 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:51.781 #define SPDK_CONFIG_COVERAGE 1 00:11:51.781 #define SPDK_CONFIG_CROSS_PREFIX 00:11:51.781 #undef SPDK_CONFIG_CRYPTO 00:11:51.781 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:51.781 #undef SPDK_CONFIG_CUSTOMOCF 00:11:51.781 #undef SPDK_CONFIG_DAOS 00:11:51.781 #define SPDK_CONFIG_DAOS_DIR 00:11:51.781 #define SPDK_CONFIG_DEBUG 1 00:11:51.781 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:51.781 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:11:51.781 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:51.781 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:51.781 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:51.781 #undef SPDK_CONFIG_DPDK_UADK 00:11:51.781 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:11:51.781 #define SPDK_CONFIG_EXAMPLES 1 00:11:51.781 #undef SPDK_CONFIG_FC 00:11:51.781 #define SPDK_CONFIG_FC_PATH 00:11:51.781 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:51.781 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:51.781 #define SPDK_CONFIG_FSDEV 1 00:11:51.781 #undef SPDK_CONFIG_FUSE 00:11:51.781 #undef SPDK_CONFIG_FUZZER 00:11:51.781 #define SPDK_CONFIG_FUZZER_LIB 00:11:51.781 #undef SPDK_CONFIG_GOLANG 00:11:51.781 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:51.781 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:51.781 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:51.781 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:51.781 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:51.781 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:51.781 #undef SPDK_CONFIG_HAVE_LZ4 00:11:51.781 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:51.781 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:51.781 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:51.781 #define SPDK_CONFIG_IDXD 1 00:11:51.781 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:51.781 #undef SPDK_CONFIG_IPSEC_MB 00:11:51.781 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:51.781 #define SPDK_CONFIG_ISAL 1 00:11:51.781 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:51.781 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:51.781 #define SPDK_CONFIG_LIBDIR 00:11:51.781 #undef SPDK_CONFIG_LTO 00:11:51.781 #define SPDK_CONFIG_MAX_LCORES 128 00:11:51.781 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:51.781 #define SPDK_CONFIG_NVME_CUSE 1 00:11:51.781 #undef SPDK_CONFIG_OCF 00:11:51.781 #define SPDK_CONFIG_OCF_PATH 00:11:51.781 #define SPDK_CONFIG_OPENSSL_PATH 00:11:51.781 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:51.781 
#define SPDK_CONFIG_PGO_DIR 00:11:51.781 #undef SPDK_CONFIG_PGO_USE 00:11:51.781 #define SPDK_CONFIG_PREFIX /usr/local 00:11:51.781 #undef SPDK_CONFIG_RAID5F 00:11:51.781 #undef SPDK_CONFIG_RBD 00:11:51.781 #define SPDK_CONFIG_RDMA 1 00:11:51.781 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:51.781 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:51.781 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:51.781 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:51.781 #define SPDK_CONFIG_SHARED 1 00:11:51.781 #undef SPDK_CONFIG_SMA 00:11:51.781 #define SPDK_CONFIG_TESTS 1 00:11:51.781 #undef SPDK_CONFIG_TSAN 00:11:51.781 #define SPDK_CONFIG_UBLK 1 00:11:51.781 #define SPDK_CONFIG_UBSAN 1 00:11:51.781 #undef SPDK_CONFIG_UNIT_TESTS 00:11:51.781 #undef SPDK_CONFIG_URING 00:11:51.781 #define SPDK_CONFIG_URING_PATH 00:11:51.781 #undef SPDK_CONFIG_URING_ZNS 00:11:51.781 #undef SPDK_CONFIG_USDT 00:11:51.781 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:51.781 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:51.781 #undef SPDK_CONFIG_VFIO_USER 00:11:51.781 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:51.781 #define SPDK_CONFIG_VHOST 1 00:11:51.781 #define SPDK_CONFIG_VIRTIO 1 00:11:51.781 #undef SPDK_CONFIG_VTUNE 00:11:51.781 #define SPDK_CONFIG_VTUNE_DIR 00:11:51.781 #define SPDK_CONFIG_WERROR 1 00:11:51.781 #define SPDK_CONFIG_WPDK_DIR 00:11:51.781 #define SPDK_CONFIG_XNVME 1 00:11:51.781 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:51.781 07:43:41 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:51.781 07:43:41 nvme_xnvme -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:51.781 07:43:41 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:11:51.781 07:43:41 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:51.781 07:43:41 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:51.781 07:43:41 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:51.781 07:43:41 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.781 07:43:41 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.781 07:43:41 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.781 07:43:41 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:11:51.781 07:43:41 nvme_xnvme -- paths/export.sh@6 -- # 
echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.781 07:43:41 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:11:51.781 07:43:41 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:11:51.781 07:43:41 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:11:51.781 07:43:41 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:11:51.781 07:43:41 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:11:51.781 07:43:41 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:11:51.781 07:43:41 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:11:51.781 07:43:41 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:11:51.781 07:43:41 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:11:51.781 07:43:41 nvme_xnvme -- pm/common@68 -- # uname -s 00:11:51.781 07:43:41 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:11:51.781 07:43:41 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:51.781 07:43:41 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:51.781 07:43:41 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:51.781 07:43:41 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:51.781 07:43:41 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:51.781 07:43:41 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:51.781 07:43:41 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:11:51.781 07:43:41 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:51.781 07:43:41 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:51.781 07:43:41 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:51.781 07:43:41 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:51.781 07:43:41 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:11:51.781 07:43:41 nvme_xnvme -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:11:51.781 07:43:41 nvme_xnvme -- common/autotest_common.sh@58 -- # : 1 00:11:51.781 07:43:41 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:51.781 07:43:41 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:11:51.781 07:43:41 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:51.781 07:43:41 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:11:51.782 07:43:41 
nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@130 -- # : 0 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@142 -- 
# : true 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@173 -- # : 0 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:11:51.782 
07:43:41 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:51.782 07:43:41 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:51.783 
07:43:41 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 
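The storage probe traced next (set_test_storage, autotest_common.sh@331 onward) snapshots df -T into the mounts/fss/sizes/avails arrays and then walks the candidate directories until one offers the requested ~2 GiB. A compressed, self-contained sketch of that selection logic; the df column handling mirrors the trace, the rest is paraphrased:

    #!/usr/bin/env bash
    # Hedged sketch of set_test_storage's selection logic. Usage:
    #   ./probe.sh /path/to/testdir /tmp/fallback
    requested_size=2147483648          # 2 GiB as passed in the trace
                                       # (later padded there to 2214592512)
    declare -A avails
    # df -T columns: source fstype 1K-blocks used available use% mount
    while read -r _src _fs _size _used avail _pct mount; do
        avails["$mount"]=$((avail * 1024))
    done < <(df -T | tail -n +2)
    for target_dir in "$@"; do
        mount=$(df --output=target "$target_dir" | tail -n 1)
        if (( ${avails[$mount]:-0} >= requested_size )); then
            printf '* Found test storage at %s\n' "$target_dir"
            export SPDK_TEST_STORAGE=$target_dir
            break
        fi
    done

In this run /home (btrfs on /dev/vda5, ~13.9 GB available against the ~2.2 GB ask) satisfies the first candidate, which is why the trace below settles on /home/vagrant/spdk_repo/spdk/test/nvme/xnvme.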
00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 68456 ]] 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 68456 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.XbvuU6 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.XbvuU6/tests/xnvme /tmp/spdk.XbvuU6 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13942595584 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5625290752 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:51.783 
07:43:41 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6260625408 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265389056 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493362176 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13942595584 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5625290752 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265245696 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:11:51.783 07:43:41 
nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:51.783 07:43:41 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:11:51.784 07:43:41 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:11:51.784 07:43:41 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:51.784 07:43:41 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:51.784 07:43:41 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:11:51.784 07:43:41 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:11:51.784 07:43:41 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=98813972480 00:11:51.784 07:43:41 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:11:51.784 07:43:41 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=888807424 00:11:51.784 07:43:41 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:51.784 07:43:41 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:51.784 * Looking for test storage... 00:11:51.784 07:43:41 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:51.784 07:43:41 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:51.784 07:43:41 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:51.784 07:43:41 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:51.784 07:43:41 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:11:51.784 07:43:41 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13942595584 00:11:51.784 07:43:41 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:51.784 07:43:41 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:51.784 07:43:41 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:11:51.784 07:43:41 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:11:51.784 07:43:41 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:11:51.784 07:43:41 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:51.784 07:43:41 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:51.784 07:43:41 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:51.784 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:51.784 07:43:41 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:11:51.784 07:43:41 nvme_xnvme -- common/autotest_common.sh@1680 -- # set -o errtrace 00:11:51.784 07:43:41 nvme_xnvme -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:11:51.784 07:43:41 nvme_xnvme -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:51.784 07:43:41 nvme_xnvme -- 
common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:51.784 07:43:41 nvme_xnvme -- common/autotest_common.sh@1685 -- # true 00:11:51.784 07:43:41 nvme_xnvme -- common/autotest_common.sh@1687 -- # xtrace_fd 00:11:51.784 07:43:41 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:11:51.784 07:43:41 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:11:51.784 07:43:41 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:11:51.784 07:43:41 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:11:51.784 07:43:41 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:51.784 07:43:41 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:51.784 07:43:41 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:51.784 07:43:41 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:11:51.784 07:43:41 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:51.784 07:43:41 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:11:51.784 07:43:41 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:51.784 07:43:41 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:51.784 07:43:41 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:51.784 07:43:41 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:51.784 07:43:41 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:51.784 07:43:41 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:11:51.784 07:43:41 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:11:51.784 07:43:41 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:11:51.784 07:43:41 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:11:51.784 07:43:41 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:11:51.784 07:43:41 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:11:51.784 07:43:41 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:11:51.784 07:43:41 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:51.784 07:43:41 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:11:51.784 07:43:41 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:11:51.784 07:43:41 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:51.784 07:43:41 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:51.784 07:43:41 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:11:51.784 07:43:41 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:11:51.784 07:43:41 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:51.784 07:43:41 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:11:51.784 07:43:41 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:11:51.784 07:43:41 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:11:51.784 07:43:41 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:11:51.784 07:43:41 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:51.784 07:43:41 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:11:51.784 07:43:41 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:11:51.784 07:43:41 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:51.784 07:43:41 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:51.784 07:43:41 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:11:51.784 07:43:41 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:51.784 07:43:41 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:51.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.784 --rc genhtml_branch_coverage=1 00:11:51.784 --rc genhtml_function_coverage=1 00:11:51.784 --rc genhtml_legend=1 00:11:51.784 --rc geninfo_all_blocks=1 00:11:51.784 --rc geninfo_unexecuted_blocks=1 00:11:51.784 00:11:51.784 ' 00:11:51.784 07:43:41 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:51.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.784 --rc genhtml_branch_coverage=1 00:11:51.784 --rc genhtml_function_coverage=1 00:11:51.784 --rc genhtml_legend=1 00:11:51.784 --rc geninfo_all_blocks=1 00:11:51.784 --rc geninfo_unexecuted_blocks=1 00:11:51.784 00:11:51.784 ' 00:11:51.784 07:43:41 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:51.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.784 --rc genhtml_branch_coverage=1 00:11:51.784 --rc genhtml_function_coverage=1 00:11:51.784 --rc genhtml_legend=1 00:11:51.784 --rc geninfo_all_blocks=1 00:11:51.784 --rc geninfo_unexecuted_blocks=1 00:11:51.784 00:11:51.784 ' 00:11:51.784 07:43:41 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:51.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.784 --rc genhtml_branch_coverage=1 00:11:51.784 --rc genhtml_function_coverage=1 00:11:51.784 --rc genhtml_legend=1 00:11:51.784 --rc geninfo_all_blocks=1 00:11:51.784 --rc geninfo_unexecuted_blocks=1 00:11:51.784 00:11:51.784 ' 00:11:51.784 07:43:41 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:51.784 07:43:41 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:11:51.784 07:43:41 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:51.784 07:43:41 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:51.784 07:43:41 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:51.784 07:43:41 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.784 07:43:41 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.784 07:43:41 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.784 07:43:41 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:11:51.784 07:43:41 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.784 07:43:41 nvme_xnvme -- xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:11:51.784 07:43:41 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:11:51.784 07:43:41 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:11:51.784 07:43:41 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:11:51.784 07:43:41 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:11:51.784 07:43:41 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:11:51.784 07:43:41 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:11:51.784 07:43:41 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:11:51.784 07:43:41 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:11:51.784 07:43:41 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:11:51.784 07:43:41 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:11:51.784 07:43:41 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:11:51.784 07:43:41 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:11:51.785 07:43:41 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:11:51.785 07:43:41 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:11:51.785 
07:43:41 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:11:51.785 07:43:41 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:11:51.785 07:43:41 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:11:51.785 07:43:41 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:11:51.785 07:43:41 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:11:51.785 07:43:41 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:11:51.785 07:43:41 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:52.046 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:52.308 Waiting for block devices as requested 00:11:52.308 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:52.569 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:52.569 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:52.569 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:57.861 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:57.861 07:43:47 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:11:58.122 07:43:47 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:11:58.122 07:43:47 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:11:58.384 07:43:48 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:11:58.384 07:43:48 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:11:58.384 07:43:48 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:11:58.384 07:43:48 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:11:58.384 07:43:48 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:11:58.384 No valid GPT data, bailing 00:11:58.384 07:43:48 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:11:58.384 07:43:48 nvme_xnvme -- scripts/common.sh@394 -- # pt= 00:11:58.384 07:43:48 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:11:58.384 07:43:48 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:11:58.384 07:43:48 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:11:58.384 07:43:48 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:11:58.384 07:43:48 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:11:58.384 07:43:48 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:11:58.384 07:43:48 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:11:58.384 07:43:48 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:11:58.384 07:43:48 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:11:58.384 07:43:48 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:11:58.384 07:43:48 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:11:58.384 07:43:48 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:11:58.384 07:43:48 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:11:58.384 07:43:48 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:11:58.384 07:43:48 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:11:58.384 07:43:48 
nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:58.384 07:43:48 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:58.384 07:43:48 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:11:58.384 ************************************ 00:11:58.384 START TEST xnvme_rpc 00:11:58.384 ************************************ 00:11:58.384 07:43:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:11:58.384 07:43:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:11:58.384 07:43:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:11:58.384 07:43:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:11:58.384 07:43:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:11:58.384 07:43:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=68839 00:11:58.384 07:43:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 68839 00:11:58.384 07:43:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 68839 ']' 00:11:58.384 07:43:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:58.384 07:43:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:58.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:58.384 07:43:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:58.384 07:43:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:58.384 07:43:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:58.384 07:43:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:58.384 [2024-11-29 07:43:48.292466] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
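The xnvme_rpc test follows a fixed round trip: start spdk_tgt in the background, block until its RPC socket accepts connections, create an xnvme bdev over RPC, read the configuration back with jq, delete the bdev, and kill the target. A minimal sketch of that flow, using the exact commands from the trace (rpc_cmd, waitforlisten, and killprocess are helpers defined in autotest_common.sh):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
spdk_tgt=$!                            # PID of the background target
waitforlisten "$spdk_tgt"              # poll until /var/tmp/spdk.sock is listening
rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio ''   # '' => conserve_cpu=false
rpc_cmd framework_get_config bdev |
  jq -r '.[] | select(.method == "bdev_xnvme_create").params.name'   # expect: xnvme_bdev
rpc_cmd bdev_xnvme_delete xnvme_bdev
killprocess "$spdk_tgt"                # SIGTERM, then wait for exit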
00:11:58.384 [2024-11-29 07:43:48.292769] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68839 ] 00:11:58.646 [2024-11-29 07:43:48.458124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:58.907 [2024-11-29 07:43:48.600815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.480 07:43:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:59.480 07:43:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:59.480 07:43:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:11:59.480 07:43:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.480 07:43:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.480 xnvme_bdev 00:11:59.480 07:43:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.480 07:43:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:11:59.480 07:43:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:11:59.480 07:43:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.480 07:43:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:11:59.480 07:43:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.480 07:43:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.480 07:43:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:11:59.480 07:43:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:11:59.480 07:43:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:11:59.480 07:43:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.480 07:43:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.480 07:43:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:11:59.480 07:43:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.480 07:43:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:11:59.480 07:43:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:11:59.743 07:43:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:11:59.743 07:43:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:11:59.743 07:43:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.743 07:43:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.743 07:43:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.743 07:43:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:11:59.743 07:43:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:11:59.743 07:43:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:11:59.743 07:43:49 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:11:59.743 07:43:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.743 07:43:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.743 07:43:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.743 07:43:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:11:59.743 07:43:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:11:59.743 07:43:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.743 07:43:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.743 07:43:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.743 07:43:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 68839 00:11:59.743 07:43:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 68839 ']' 00:11:59.743 07:43:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 68839 00:11:59.743 07:43:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:11:59.743 07:43:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:59.743 07:43:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68839 00:11:59.743 killing process with pid 68839 00:11:59.743 07:43:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:59.743 07:43:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:59.743 07:43:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68839' 00:11:59.743 07:43:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 68839 00:11:59.743 07:43:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 68839 00:12:01.125 00:12:01.125 real 0m2.829s 00:12:01.125 user 0m2.765s 00:12:01.125 sys 0m0.513s 00:12:01.125 ************************************ 00:12:01.125 END TEST xnvme_rpc 00:12:01.125 ************************************ 00:12:01.125 07:43:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:01.125 07:43:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.384 07:43:51 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:01.384 07:43:51 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:01.384 07:43:51 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:01.384 07:43:51 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:01.384 ************************************ 00:12:01.384 START TEST xnvme_bdevperf 00:12:01.384 ************************************ 00:12:01.384 07:43:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:01.384 07:43:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:01.384 07:43:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:01.384 07:43:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:01.384 07:43:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:01.384 07:43:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:12:01.384 07:43:51 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:01.384 07:43:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:01.384 { 00:12:01.384 "subsystems": [ 00:12:01.384 { 00:12:01.384 "subsystem": "bdev", 00:12:01.384 "config": [ 00:12:01.384 { 00:12:01.384 "params": { 00:12:01.384 "io_mechanism": "libaio", 00:12:01.384 "conserve_cpu": false, 00:12:01.384 "filename": "/dev/nvme0n1", 00:12:01.384 "name": "xnvme_bdev" 00:12:01.384 }, 00:12:01.384 "method": "bdev_xnvme_create" 00:12:01.384 }, 00:12:01.384 { 00:12:01.384 "method": "bdev_wait_for_examine" 00:12:01.384 } 00:12:01.384 ] 00:12:01.384 } 00:12:01.384 ] 00:12:01.384 } 00:12:01.384 [2024-11-29 07:43:51.157994] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:12:01.384 [2024-11-29 07:43:51.158112] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68913 ] 00:12:01.384 [2024-11-29 07:43:51.315518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.642 [2024-11-29 07:43:51.402991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.903 Running I/O for 5 seconds... 00:12:03.781 26364.00 IOPS, 102.98 MiB/s [2024-11-29T07:43:54.664Z] 27601.00 IOPS, 107.82 MiB/s [2024-11-29T07:43:56.047Z] 30520.00 IOPS, 119.22 MiB/s [2024-11-29T07:43:56.989Z] 29982.75 IOPS, 117.12 MiB/s 00:12:07.046 Latency(us) 00:12:07.046 [2024-11-29T07:43:56.990Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:07.046 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:07.046 xnvme_bdev : 5.01 29889.52 116.76 0.00 0.00 2136.52 393.85 9729.58 00:12:07.046 [2024-11-29T07:43:56.990Z] =================================================================================================================== 00:12:07.046 [2024-11-29T07:43:56.990Z] Total : 29889.52 116.76 0.00 0.00 2136.52 393.85 9729.58 00:12:07.622 07:43:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:07.622 07:43:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:07.622 07:43:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:07.622 07:43:57 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:07.622 07:43:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:07.622 { 00:12:07.622 "subsystems": [ 00:12:07.622 { 00:12:07.622 "subsystem": "bdev", 00:12:07.622 "config": [ 00:12:07.622 { 00:12:07.622 "params": { 00:12:07.622 "io_mechanism": "libaio", 00:12:07.622 "conserve_cpu": false, 00:12:07.622 "filename": "/dev/nvme0n1", 00:12:07.622 "name": "xnvme_bdev" 00:12:07.622 }, 00:12:07.622 "method": "bdev_xnvme_create" 00:12:07.622 }, 00:12:07.622 { 00:12:07.622 "method": "bdev_wait_for_examine" 00:12:07.622 } 00:12:07.622 ] 00:12:07.622 } 00:12:07.622 ] 00:12:07.622 } 00:12:07.622 [2024-11-29 07:43:57.546388] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
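Each bdevperf pass receives its bdev configuration as JSON on an inherited file descriptor (--json /dev/fd/62) rather than from a file on disk; gen_conf emits the JSON block printed above. A sketch of the same invocation with the configuration supplied through an explicit here-doc on fd 62 — equivalent in effect to the gen_conf pipeline, with flags and paths as used in this workspace:

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 \
    -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 62<<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [
    { "params": { "io_mechanism": "libaio", "conserve_cpu": false,
                  "filename": "/dev/nvme0n1", "name": "xnvme_bdev" },
      "method": "bdev_xnvme_create" },
    { "method": "bdev_wait_for_examine" } ] } ] }
EOF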
00:12:07.622 [2024-11-29 07:43:57.546564] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68988 ] 00:12:07.883 [2024-11-29 07:43:57.711632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.883 [2024-11-29 07:43:57.816094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.462 Running I/O for 5 seconds... 00:12:10.387 31381.00 IOPS, 122.58 MiB/s [2024-11-29T07:44:01.272Z] 31127.00 IOPS, 121.59 MiB/s [2024-11-29T07:44:02.212Z] 30482.67 IOPS, 119.07 MiB/s [2024-11-29T07:44:03.150Z] 30432.00 IOPS, 118.88 MiB/s [2024-11-29T07:44:03.410Z] 30980.40 IOPS, 121.02 MiB/s 00:12:13.466 Latency(us) 00:12:13.466 [2024-11-29T07:44:03.410Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:13.466 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:13.466 xnvme_bdev : 5.01 30961.82 120.94 0.00 0.00 2062.29 463.16 6704.84 00:12:13.466 [2024-11-29T07:44:03.410Z] =================================================================================================================== 00:12:13.466 [2024-11-29T07:44:03.410Z] Total : 30961.82 120.94 0.00 0.00 2062.29 463.16 6704.84 00:12:14.038 ************************************ 00:12:14.038 END TEST xnvme_bdevperf 00:12:14.038 ************************************ 00:12:14.038 00:12:14.038 real 0m12.878s 00:12:14.038 user 0m4.887s 00:12:14.038 sys 0m6.411s 00:12:14.038 07:44:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:14.038 07:44:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:14.300 07:44:04 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:14.300 07:44:04 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:14.300 07:44:04 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:14.300 07:44:04 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:14.300 ************************************ 00:12:14.300 START TEST xnvme_fio_plugin 00:12:14.300 ************************************ 00:12:14.300 07:44:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:14.300 07:44:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:14.300 07:44:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:12:14.300 07:44:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:14.300 07:44:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:14.300 07:44:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:14.300 07:44:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:14.300 07:44:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:12:14.300 07:44:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:14.300 07:44:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:14.300 07:44:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:14.300 07:44:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:14.300 07:44:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:14.300 07:44:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:14.300 07:44:04 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:14.300 07:44:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:14.300 07:44:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:14.300 07:44:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:14.300 07:44:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:14.300 07:44:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:14.300 07:44:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:14.300 07:44:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:14.300 07:44:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:14.300 07:44:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:14.300 { 00:12:14.300 "subsystems": [ 00:12:14.300 { 00:12:14.300 "subsystem": "bdev", 00:12:14.300 "config": [ 00:12:14.300 { 00:12:14.300 "params": { 00:12:14.300 "io_mechanism": "libaio", 00:12:14.300 "conserve_cpu": false, 00:12:14.300 "filename": "/dev/nvme0n1", 00:12:14.300 "name": "xnvme_bdev" 00:12:14.300 }, 00:12:14.300 "method": "bdev_xnvme_create" 00:12:14.300 }, 00:12:14.300 { 00:12:14.300 "method": "bdev_wait_for_examine" 00:12:14.300 } 00:12:14.300 ] 00:12:14.300 } 00:12:14.300 ] 00:12:14.300 } 00:12:14.560 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:14.560 fio-3.35 00:12:14.560 Starting 1 thread 00:12:21.154 00:12:21.154 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69108: Fri Nov 29 07:44:09 2024 00:12:21.154 read: IOPS=31.1k, BW=121MiB/s (127MB/s)(607MiB/5002msec) 00:12:21.154 slat (usec): min=4, max=1929, avg=22.27, stdev=103.16 00:12:21.154 clat (usec): min=122, max=5586, avg=1455.75, stdev=518.13 00:12:21.154 lat (usec): min=224, max=5745, avg=1478.02, stdev=506.55 00:12:21.154 clat percentiles (usec): 00:12:21.154 | 1.00th=[ 314], 5.00th=[ 635], 10.00th=[ 807], 20.00th=[ 1045], 00:12:21.154 | 30.00th=[ 1205], 40.00th=[ 1336], 50.00th=[ 1450], 60.00th=[ 1565], 00:12:21.154 | 70.00th=[ 1680], 80.00th=[ 1811], 90.00th=[ 2057], 95.00th=[ 2311], 00:12:21.154 | 99.00th=[ 2966], 99.50th=[ 3261], 99.90th=[ 4080], 99.95th=[ 4293], 00:12:21.154 | 99.99th=[ 5342] 00:12:21.154 bw ( KiB/s): 
min=121272, max=129184, per=100.00%, avg=124583.11, stdev=2765.75, samples=9 00:12:21.154 iops : min=30318, max=32296, avg=31145.78, stdev=691.44, samples=9 00:12:21.154 lat (usec) : 250=0.48%, 500=2.19%, 750=5.47%, 1000=9.59% 00:12:21.154 lat (msec) : 2=70.83%, 4=11.31%, 10=0.13% 00:12:21.154 cpu : usr=42.87%, sys=48.83%, ctx=12, majf=0, minf=764 00:12:21.154 IO depths : 1=0.5%, 2=1.3%, 4=3.2%, 8=8.3%, 16=22.7%, 32=61.7%, >=64=2.1% 00:12:21.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:21.154 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:12:21.154 issued rwts: total=155409,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:21.154 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:21.154 00:12:21.154 Run status group 0 (all jobs): 00:12:21.154 READ: bw=121MiB/s (127MB/s), 121MiB/s-121MiB/s (127MB/s-127MB/s), io=607MiB (637MB), run=5002-5002msec 00:12:21.154 ----------------------------------------------------- 00:12:21.154 Suppressions used: 00:12:21.154 count bytes template 00:12:21.154 1 11 /usr/src/fio/parse.c 00:12:21.154 1 8 libtcmalloc_minimal.so 00:12:21.154 1 904 libcrypto.so 00:12:21.154 ----------------------------------------------------- 00:12:21.154 00:12:21.154 07:44:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:21.154 07:44:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:21.154 07:44:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:21.154 07:44:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:21.154 07:44:10 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:21.154 07:44:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:21.154 07:44:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:21.154 07:44:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:21.154 07:44:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:21.154 07:44:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:21.154 07:44:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:21.154 07:44:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:21.154 07:44:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:21.154 07:44:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:21.154 07:44:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:21.154 07:44:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:21.154 07:44:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:21.154 
07:44:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:21.154 07:44:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:21.154 07:44:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:21.154 07:44:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:21.154 { 00:12:21.154 "subsystems": [ 00:12:21.154 { 00:12:21.154 "subsystem": "bdev", 00:12:21.154 "config": [ 00:12:21.154 { 00:12:21.154 "params": { 00:12:21.154 "io_mechanism": "libaio", 00:12:21.154 "conserve_cpu": false, 00:12:21.154 "filename": "/dev/nvme0n1", 00:12:21.154 "name": "xnvme_bdev" 00:12:21.154 }, 00:12:21.154 "method": "bdev_xnvme_create" 00:12:21.154 }, 00:12:21.154 { 00:12:21.154 "method": "bdev_wait_for_examine" 00:12:21.154 } 00:12:21.154 ] 00:12:21.154 } 00:12:21.154 ] 00:12:21.154 } 00:12:21.416 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:21.416 fio-3.35 00:12:21.416 Starting 1 thread 00:12:28.008 00:12:28.008 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69199: Fri Nov 29 07:44:16 2024 00:12:28.008 write: IOPS=33.0k, BW=129MiB/s (135MB/s)(645MiB/5001msec); 0 zone resets 00:12:28.008 slat (usec): min=4, max=1959, avg=22.77, stdev=90.16 00:12:28.008 clat (usec): min=74, max=6212, avg=1318.95, stdev=553.07 00:12:28.008 lat (usec): min=191, max=6218, avg=1341.72, stdev=546.15 00:12:28.008 clat percentiles (usec): 00:12:28.008 | 1.00th=[ 289], 5.00th=[ 523], 10.00th=[ 668], 20.00th=[ 848], 00:12:28.008 | 30.00th=[ 1004], 40.00th=[ 1139], 50.00th=[ 1270], 60.00th=[ 1401], 00:12:28.008 | 70.00th=[ 1549], 80.00th=[ 1729], 90.00th=[ 1991], 95.00th=[ 2245], 00:12:28.008 | 99.00th=[ 3064], 99.50th=[ 3294], 99.90th=[ 3949], 99.95th=[ 4228], 00:12:28.008 | 99.99th=[ 5014] 00:12:28.008 bw ( KiB/s): min=115104, max=147496, per=99.42%, avg=131395.67, stdev=8705.27, samples=9 00:12:28.008 iops : min=28776, max=36874, avg=32848.89, stdev=2176.33, samples=9 00:12:28.008 lat (usec) : 100=0.01%, 250=0.60%, 500=3.90%, 750=9.67%, 1000=15.36% 00:12:28.008 lat (msec) : 2=60.63%, 4=9.75%, 10=0.09% 00:12:28.008 cpu : usr=39.20%, sys=49.96%, ctx=11, majf=0, minf=765 00:12:28.008 IO depths : 1=0.4%, 2=1.0%, 4=2.8%, 8=7.9%, 16=22.8%, 32=62.9%, >=64=2.1% 00:12:28.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.008 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.7%, >=64=0.0% 00:12:28.008 issued rwts: total=0,165240,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.008 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:28.008 00:12:28.008 Run status group 0 (all jobs): 00:12:28.008 WRITE: bw=129MiB/s (135MB/s), 129MiB/s-129MiB/s (135MB/s-135MB/s), io=645MiB (677MB), run=5001-5001msec 00:12:28.008 ----------------------------------------------------- 00:12:28.008 Suppressions used: 00:12:28.008 count bytes template 00:12:28.008 1 11 /usr/src/fio/parse.c 00:12:28.008 1 8 libtcmalloc_minimal.so 00:12:28.008 1 904 libcrypto.so 00:12:28.008 ----------------------------------------------------- 00:12:28.008 00:12:28.008 00:12:28.008 real 0m13.853s 00:12:28.009 user 0m6.922s 00:12:28.009 sys 0m5.584s 
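At this point the first full pass (io_mechanism=libaio, conserve_cpu=false) is complete and the harness advances to the conserve_cpu=true leg. Reconstructed from the xnvme/xnvme.sh line numbers in the trace (@75 through @88), the overall structure is a nested loop: each io mechanism runs the three tests once per conserve_cpu setting. A sketch, with variable names taken from xnvme/common.sh as traced above:

for io in "${xnvme_io[@]}"; do                   # libaio, io_uring, io_uring_cmd
  method_bdev_xnvme_create_0["io_mechanism"]=$io
  method_bdev_xnvme_create_0["filename"]=${xnvme_filename[$io]}
  filename=${xnvme_filename[$io]}
  name=xnvme_bdev
  for cc in "${xnvme_conserve_cpu[@]}"; do       # false, then true
    method_bdev_xnvme_create_0["conserve_cpu"]=$cc
    conserve_cpu=$cc
    run_test xnvme_rpc xnvme_rpc                 # RPC create/inspect/delete round trip
    run_test xnvme_bdevperf xnvme_bdevperf       # randread + randwrite via bdevperf
    run_test xnvme_fio_plugin xnvme_fio_plugin   # randread + randwrite via the fio ioengine
  done
done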
00:12:28.009 ************************************ 00:12:28.009 END TEST xnvme_fio_plugin 00:12:28.009 ************************************ 00:12:28.009 07:44:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:28.009 07:44:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:28.009 07:44:17 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:28.009 07:44:17 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:12:28.009 07:44:17 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:12:28.009 07:44:17 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:28.009 07:44:17 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:28.009 07:44:17 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:28.009 07:44:17 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:28.270 ************************************ 00:12:28.270 START TEST xnvme_rpc 00:12:28.270 ************************************ 00:12:28.270 07:44:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:28.270 07:44:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:28.270 07:44:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:28.270 07:44:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:28.270 07:44:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:28.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:28.270 07:44:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69284 00:12:28.270 07:44:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69284 00:12:28.270 07:44:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69284 ']' 00:12:28.270 07:44:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:28.270 07:44:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:28.270 07:44:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:28.270 07:44:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:28.270 07:44:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:28.270 07:44:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.270 [2024-11-29 07:44:18.046333] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
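The conserve_cpu=true pass differs from the first only in the -c flag handed to bdev_xnvme_create and in the value checked back out of the saved configuration. The verification step, with the jq query verbatim from the trace:

rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c    # -c => conserve_cpu=true
rpc_cmd framework_get_config bdev |
  jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # expect: true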
00:12:28.270 [2024-11-29 07:44:18.046510] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69284 ] 00:12:28.270 [2024-11-29 07:44:18.206514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:28.530 [2024-11-29 07:44:18.323016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.102 07:44:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:29.102 07:44:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:29.102 07:44:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:12:29.102 07:44:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.102 07:44:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.102 xnvme_bdev 00:12:29.102 07:44:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.102 07:44:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:29.102 07:44:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:29.102 07:44:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.102 07:44:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:29.102 07:44:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.364 07:44:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.364 07:44:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:29.364 07:44:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:29.364 07:44:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:29.364 07:44:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:29.364 07:44:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.364 07:44:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.364 07:44:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.364 07:44:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:29.364 07:44:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:29.364 07:44:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:29.364 07:44:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:29.364 07:44:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.364 07:44:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.364 07:44:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.364 07:44:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:29.364 07:44:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:29.364 07:44:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:29.364 07:44:19 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.364 07:44:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.364 07:44:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:29.364 07:44:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.364 07:44:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:12:29.364 07:44:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:29.364 07:44:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.364 07:44:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.364 07:44:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.364 07:44:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69284 00:12:29.364 07:44:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69284 ']' 00:12:29.364 07:44:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69284 00:12:29.364 07:44:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:29.364 07:44:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:29.364 07:44:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69284 00:12:29.364 killing process with pid 69284 00:12:29.364 07:44:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:29.364 07:44:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:29.364 07:44:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69284' 00:12:29.364 07:44:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69284 00:12:29.364 07:44:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69284 00:12:31.280 00:12:31.280 real 0m2.877s 00:12:31.280 user 0m2.860s 00:12:31.280 sys 0m0.455s 00:12:31.280 ************************************ 00:12:31.280 END TEST xnvme_rpc 00:12:31.280 ************************************ 00:12:31.280 07:44:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:31.280 07:44:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.280 07:44:20 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:31.280 07:44:20 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:31.280 07:44:20 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:31.280 07:44:20 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:31.280 ************************************ 00:12:31.280 START TEST xnvme_bdevperf 00:12:31.280 ************************************ 00:12:31.280 07:44:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:31.280 07:44:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:31.280 07:44:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:31.280 07:44:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:31.280 07:44:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:31.280 07:44:20 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:12:31.280 07:44:20 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:31.280 07:44:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:31.280 { 00:12:31.280 "subsystems": [ 00:12:31.280 { 00:12:31.280 "subsystem": "bdev", 00:12:31.280 "config": [ 00:12:31.280 { 00:12:31.280 "params": { 00:12:31.280 "io_mechanism": "libaio", 00:12:31.280 "conserve_cpu": true, 00:12:31.280 "filename": "/dev/nvme0n1", 00:12:31.280 "name": "xnvme_bdev" 00:12:31.280 }, 00:12:31.280 "method": "bdev_xnvme_create" 00:12:31.280 }, 00:12:31.280 { 00:12:31.280 "method": "bdev_wait_for_examine" 00:12:31.280 } 00:12:31.280 ] 00:12:31.280 } 00:12:31.280 ] 00:12:31.280 } 00:12:31.280 [2024-11-29 07:44:20.979643] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:12:31.280 [2024-11-29 07:44:20.979790] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69354 ] 00:12:31.280 [2024-11-29 07:44:21.144089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.541 [2024-11-29 07:44:21.271354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.802 Running I/O for 5 seconds... 00:12:33.684 32334.00 IOPS, 126.30 MiB/s [2024-11-29T07:44:25.016Z] 31375.00 IOPS, 122.56 MiB/s [2024-11-29T07:44:25.587Z] 30165.33 IOPS, 117.83 MiB/s [2024-11-29T07:44:26.972Z] 29901.00 IOPS, 116.80 MiB/s 00:12:37.028 Latency(us) 00:12:37.028 [2024-11-29T07:44:26.972Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:37.028 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:37.028 xnvme_bdev : 5.00 29731.88 116.14 0.00 0.00 2147.75 259.94 11645.24 00:12:37.028 [2024-11-29T07:44:26.972Z] =================================================================================================================== 00:12:37.028 [2024-11-29T07:44:26.972Z] Total : 29731.88 116.14 0.00 0.00 2147.75 259.94 11645.24 00:12:37.600 07:44:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:37.600 07:44:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:37.600 07:44:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:37.600 07:44:27 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:37.600 07:44:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:37.600 { 00:12:37.600 "subsystems": [ 00:12:37.600 { 00:12:37.600 "subsystem": "bdev", 00:12:37.600 "config": [ 00:12:37.600 { 00:12:37.600 "params": { 00:12:37.600 "io_mechanism": "libaio", 00:12:37.600 "conserve_cpu": true, 00:12:37.600 "filename": "/dev/nvme0n1", 00:12:37.600 "name": "xnvme_bdev" 00:12:37.600 }, 00:12:37.600 "method": "bdev_xnvme_create" 00:12:37.600 }, 00:12:37.600 { 00:12:37.600 "method": "bdev_wait_for_examine" 00:12:37.600 } 00:12:37.600 ] 00:12:37.600 } 00:12:37.600 ] 00:12:37.600 } 00:12:37.600 [2024-11-29 07:44:27.457695] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
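The JSON blocks embedded throughout the trace are produced by gen_conf (dd/common.sh), which serializes the method_bdev_xnvme_create_0 associative array and appends a bdev_wait_for_examine step. A minimal hypothetical stand-in showing the shape of its output — not the actual dd/common.sh implementation:

gen_conf_sketch() {                    # hypothetical; mirrors gen_conf's output shape
  local -n p=method_bdev_xnvme_create_0
  cat <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [
    { "params": { "io_mechanism": "${p[io_mechanism]}",
                  "conserve_cpu": ${p[conserve_cpu]},
                  "filename": "${p[filename]}",
                  "name": "${p[name]}" },
      "method": "bdev_xnvme_create" },
    { "method": "bdev_wait_for_examine" } ] } ] }
EOF
}
gen_conf_sketch    # with conserve_cpu=true this matches the JSON printed above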
00:12:37.600 [2024-11-29 07:44:27.457835] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69435 ] 00:12:37.862 [2024-11-29 07:44:27.623070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.862 [2024-11-29 07:44:27.741972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.124 Running I/O for 5 seconds... 00:12:40.135 30035.00 IOPS, 117.32 MiB/s [2024-11-29T07:44:31.467Z] 30614.00 IOPS, 119.59 MiB/s [2024-11-29T07:44:32.415Z] 30868.67 IOPS, 120.58 MiB/s [2024-11-29T07:44:33.359Z] 31136.75 IOPS, 121.63 MiB/s 00:12:43.415 Latency(us) 00:12:43.415 [2024-11-29T07:44:33.359Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:43.415 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:43.415 xnvme_bdev : 5.00 30470.83 119.03 0.00 0.00 2095.75 270.97 9931.22 00:12:43.415 [2024-11-29T07:44:33.359Z] =================================================================================================================== 00:12:43.415 [2024-11-29T07:44:33.359Z] Total : 30470.83 119.03 0.00 0.00 2095.75 270.97 9931.22 00:12:43.988 00:12:43.988 real 0m12.960s 00:12:43.988 user 0m4.809s 00:12:43.988 sys 0m6.553s 00:12:43.988 07:44:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:43.988 ************************************ 00:12:43.988 END TEST xnvme_bdevperf 00:12:43.988 ************************************ 00:12:43.988 07:44:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:43.988 07:44:33 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:43.988 07:44:33 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:43.988 07:44:33 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:43.988 07:44:33 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:44.250 ************************************ 00:12:44.250 START TEST xnvme_fio_plugin 00:12:44.250 ************************************ 00:12:44.250 07:44:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:44.250 07:44:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:44.250 07:44:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:12:44.250 07:44:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:44.250 07:44:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:44.250 07:44:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:44.250 07:44:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:44.250 07:44:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:44.250 07:44:33 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # 
xtrace_disable 00:12:44.250 07:44:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:44.250 07:44:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:44.250 07:44:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:44.250 07:44:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:44.250 07:44:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:44.250 07:44:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:44.250 07:44:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:44.250 07:44:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:44.250 07:44:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:44.250 07:44:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:44.250 07:44:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:44.250 07:44:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:44.250 07:44:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:44.250 07:44:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:44.250 07:44:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:44.250 { 00:12:44.250 "subsystems": [ 00:12:44.250 { 00:12:44.250 "subsystem": "bdev", 00:12:44.250 "config": [ 00:12:44.250 { 00:12:44.250 "params": { 00:12:44.250 "io_mechanism": "libaio", 00:12:44.250 "conserve_cpu": true, 00:12:44.250 "filename": "/dev/nvme0n1", 00:12:44.250 "name": "xnvme_bdev" 00:12:44.250 }, 00:12:44.250 "method": "bdev_xnvme_create" 00:12:44.250 }, 00:12:44.250 { 00:12:44.250 "method": "bdev_wait_for_examine" 00:12:44.250 } 00:12:44.250 ] 00:12:44.250 } 00:12:44.250 ] 00:12:44.250 } 00:12:44.250 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:44.250 fio-3.35 00:12:44.250 Starting 1 thread 00:12:50.838 00:12:50.838 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69555: Fri Nov 29 07:44:39 2024 00:12:50.838 read: IOPS=30.3k, BW=118MiB/s (124MB/s)(592MiB/5001msec) 00:12:50.838 slat (usec): min=4, max=2609, avg=24.10, stdev=106.39 00:12:50.838 clat (usec): min=105, max=7849, avg=1456.19, stdev=547.92 00:12:50.838 lat (usec): min=197, max=7854, avg=1480.29, stdev=536.16 00:12:50.838 clat percentiles (usec): 00:12:50.838 | 1.00th=[ 285], 5.00th=[ 578], 10.00th=[ 750], 20.00th=[ 1012], 00:12:50.838 | 30.00th=[ 1188], 40.00th=[ 1336], 50.00th=[ 1467], 60.00th=[ 1582], 00:12:50.838 | 70.00th=[ 1696], 80.00th=[ 1844], 90.00th=[ 2089], 95.00th=[ 2343], 00:12:50.838 | 99.00th=[ 3032], 99.50th=[ 3425], 99.90th=[ 4080], 99.95th=[ 4228], 00:12:50.838 | 99.99th=[ 4686] 00:12:50.838 bw ( KiB/s): min=115896, max=126480, per=99.48%, avg=120659.56, stdev=3222.05, 
samples=9 00:12:50.838 iops : min=28974, max=31620, avg=30164.89, stdev=805.51, samples=9 00:12:50.838 lat (usec) : 250=0.66%, 500=2.77%, 750=6.63%, 1000=9.25% 00:12:50.838 lat (msec) : 2=67.93%, 4=12.62%, 10=0.13% 00:12:50.838 cpu : usr=39.22%, sys=52.28%, ctx=66, majf=0, minf=764 00:12:50.838 IO depths : 1=0.5%, 2=1.2%, 4=3.0%, 8=8.3%, 16=23.0%, 32=62.0%, >=64=2.1% 00:12:50.838 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:50.838 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:12:50.838 issued rwts: total=151646,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:50.838 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:50.838 00:12:50.838 Run status group 0 (all jobs): 00:12:50.838 READ: bw=118MiB/s (124MB/s), 118MiB/s-118MiB/s (124MB/s-124MB/s), io=592MiB (621MB), run=5001-5001msec 00:12:51.099 ----------------------------------------------------- 00:12:51.099 Suppressions used: 00:12:51.099 count bytes template 00:12:51.099 1 11 /usr/src/fio/parse.c 00:12:51.099 1 8 libtcmalloc_minimal.so 00:12:51.099 1 904 libcrypto.so 00:12:51.099 ----------------------------------------------------- 00:12:51.099 00:12:51.099 07:44:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:51.099 07:44:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:51.099 07:44:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:51.099 07:44:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:51.099 07:44:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:51.099 07:44:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:51.099 07:44:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:51.099 07:44:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:51.099 07:44:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:51.099 07:44:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:51.099 07:44:40 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:51.099 07:44:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:51.099 07:44:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:51.099 07:44:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:51.099 07:44:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:51.099 07:44:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:51.099 07:44:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:51.099 07:44:40 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:51.099 07:44:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:51.099 07:44:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:51.099 07:44:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:51.099 { 00:12:51.099 "subsystems": [ 00:12:51.099 { 00:12:51.099 "subsystem": "bdev", 00:12:51.099 "config": [ 00:12:51.099 { 00:12:51.099 "params": { 00:12:51.099 "io_mechanism": "libaio", 00:12:51.099 "conserve_cpu": true, 00:12:51.099 "filename": "/dev/nvme0n1", 00:12:51.099 "name": "xnvme_bdev" 00:12:51.099 }, 00:12:51.099 "method": "bdev_xnvme_create" 00:12:51.099 }, 00:12:51.099 { 00:12:51.099 "method": "bdev_wait_for_examine" 00:12:51.099 } 00:12:51.099 ] 00:12:51.099 } 00:12:51.099 ] 00:12:51.099 } 00:12:51.360 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:51.360 fio-3.35 00:12:51.360 Starting 1 thread 00:12:57.949 00:12:57.950 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69647: Fri Nov 29 07:44:46 2024 00:12:57.950 write: IOPS=31.7k, BW=124MiB/s (130MB/s)(620MiB/5001msec); 0 zone resets 00:12:57.950 slat (usec): min=4, max=1932, avg=24.05, stdev=96.85 00:12:57.950 clat (usec): min=106, max=8487, avg=1366.09, stdev=573.58 00:12:57.950 lat (usec): min=183, max=8525, avg=1390.14, stdev=565.19 00:12:57.950 clat percentiles (usec): 00:12:57.950 | 1.00th=[ 277], 5.00th=[ 510], 10.00th=[ 676], 20.00th=[ 889], 00:12:57.950 | 30.00th=[ 1057], 40.00th=[ 1205], 50.00th=[ 1336], 60.00th=[ 1467], 00:12:57.950 | 70.00th=[ 1614], 80.00th=[ 1778], 90.00th=[ 2040], 95.00th=[ 2311], 00:12:57.950 | 99.00th=[ 3064], 99.50th=[ 3359], 99.90th=[ 4146], 99.95th=[ 4686], 00:12:57.950 | 99.99th=[ 8291] 00:12:57.950 bw ( KiB/s): min=118816, max=146256, per=100.00%, avg=127406.22, stdev=8729.00, samples=9 00:12:57.950 iops : min=29704, max=36564, avg=31851.56, stdev=2182.25, samples=9 00:12:57.950 lat (usec) : 250=0.68%, 500=4.11%, 750=8.30%, 1000=13.26% 00:12:57.950 lat (msec) : 2=62.48%, 4=11.05%, 10=0.12% 00:12:57.950 cpu : usr=37.74%, sys=53.00%, ctx=11, majf=0, minf=765 00:12:57.950 IO depths : 1=0.4%, 2=1.0%, 4=2.9%, 8=8.4%, 16=23.6%, 32=61.7%, >=64=2.1% 00:12:57.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:57.950 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.7%, >=64=0.0% 00:12:57.950 issued rwts: total=0,158628,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:57.950 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:57.950 00:12:57.950 Run status group 0 (all jobs): 00:12:57.950 WRITE: bw=124MiB/s (130MB/s), 124MiB/s-124MiB/s (130MB/s-130MB/s), io=620MiB (650MB), run=5001-5001msec 00:12:57.950 ----------------------------------------------------- 00:12:57.950 Suppressions used: 00:12:57.950 count bytes template 00:12:57.950 1 11 /usr/src/fio/parse.c 00:12:57.950 1 8 libtcmalloc_minimal.so 00:12:57.950 1 904 libcrypto.so 00:12:57.950 ----------------------------------------------------- 00:12:57.950 00:12:57.950 00:12:57.950 real 0m13.903s 00:12:57.950 user 0m6.706s 00:12:57.950 sys 0m5.888s 00:12:57.950 07:44:47 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:12:57.950 07:44:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:57.950 ************************************ 00:12:57.950 END TEST xnvme_fio_plugin 00:12:57.950 ************************************ 00:12:57.950 07:44:47 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:12:57.950 07:44:47 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:57.950 07:44:47 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:12:57.950 07:44:47 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:12:57.950 07:44:47 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:12:57.950 07:44:47 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:57.950 07:44:47 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:12:57.950 07:44:47 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:12:57.950 07:44:47 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:57.950 07:44:47 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:57.950 07:44:47 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:58.211 07:44:47 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:58.211 ************************************ 00:12:58.211 START TEST xnvme_rpc 00:12:58.211 ************************************ 00:12:58.211 07:44:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:58.211 07:44:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:58.211 07:44:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:58.211 07:44:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:58.211 07:44:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:58.211 07:44:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69733 00:12:58.211 07:44:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69733 00:12:58.211 07:44:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69733 ']' 00:12:58.211 07:44:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.211 07:44:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:58.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:58.211 07:44:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:58.211 07:44:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.211 07:44:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:58.211 07:44:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.211 [2024-11-29 07:44:47.992090] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
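A condensed, hand-runnable equivalent of the RPC sequence this xnvme_rpc test drives — the rpc_cmd traces below are this same flow routed through the autotest wrappers. This is a sketch, not captured from the run; it assumes the default /var/tmp/spdk.sock socket (the one spdk_tgt announces above) and /home/vagrant/spdk_repo/spdk as the working directory:

# Create an xnvme bdev over io_uring, read its config back, then tear it down.
scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring
scripts/rpc.py framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'   # expect: io_uring
scripts/rpc.py bdev_xnvme_delete xnvme_bdev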
00:12:58.211 [2024-11-29 07:44:47.992244] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69733 ] 00:12:58.472 [2024-11-29 07:44:48.154294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.472 [2024-11-29 07:44:48.276632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.045 07:44:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:59.045 07:44:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:59.045 07:44:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:12:59.045 07:44:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.046 07:44:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.307 xnvme_bdev 00:12:59.307 07:44:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.307 07:44:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:59.307 07:44:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:59.308 07:44:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.308 07:44:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.308 07:44:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:59.308 07:44:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.308 07:44:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:59.308 07:44:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:59.308 07:44:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:59.308 07:44:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.308 07:44:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.308 07:44:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:59.308 07:44:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.308 07:44:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:59.308 07:44:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:59.308 07:44:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:59.308 07:44:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:59.308 07:44:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.308 07:44:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.308 07:44:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.308 07:44:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:12:59.308 07:44:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:59.308 07:44:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:59.308 07:44:49 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.308 07:44:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.308 07:44:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:59.308 07:44:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.308 07:44:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:12:59.308 07:44:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:59.308 07:44:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.308 07:44:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.308 07:44:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.308 07:44:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69733 00:12:59.308 07:44:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69733 ']' 00:12:59.308 07:44:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69733 00:12:59.308 07:44:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:59.308 07:44:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:59.308 07:44:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69733 00:12:59.308 07:44:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:59.308 killing process with pid 69733 00:12:59.308 07:44:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:59.308 07:44:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69733' 00:12:59.308 07:44:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69733 00:12:59.308 07:44:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69733 00:13:01.234 00:13:01.234 real 0m2.918s 00:13:01.234 user 0m2.889s 00:13:01.234 sys 0m0.502s 00:13:01.234 ************************************ 00:13:01.234 END TEST xnvme_rpc 00:13:01.234 ************************************ 00:13:01.234 07:44:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:01.234 07:44:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.234 07:44:50 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:01.234 07:44:50 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:01.234 07:44:50 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:01.234 07:44:50 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:01.234 ************************************ 00:13:01.234 START TEST xnvme_bdevperf 00:13:01.234 ************************************ 00:13:01.234 07:44:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:01.234 07:44:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:01.234 07:44:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:01.234 07:44:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:01.234 07:44:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:01.234 07:44:50 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:13:01.234 07:44:50 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:01.234 07:44:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:01.234 { 00:13:01.234 "subsystems": [ 00:13:01.234 { 00:13:01.234 "subsystem": "bdev", 00:13:01.234 "config": [ 00:13:01.234 { 00:13:01.234 "params": { 00:13:01.234 "io_mechanism": "io_uring", 00:13:01.234 "conserve_cpu": false, 00:13:01.234 "filename": "/dev/nvme0n1", 00:13:01.234 "name": "xnvme_bdev" 00:13:01.234 }, 00:13:01.234 "method": "bdev_xnvme_create" 00:13:01.234 }, 00:13:01.234 { 00:13:01.234 "method": "bdev_wait_for_examine" 00:13:01.234 } 00:13:01.234 ] 00:13:01.234 } 00:13:01.234 ] 00:13:01.234 } 00:13:01.234 [2024-11-29 07:44:50.963388] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:13:01.234 [2024-11-29 07:44:50.963547] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69802 ] 00:13:01.234 [2024-11-29 07:44:51.121066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.494 [2024-11-29 07:44:51.244105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.755 Running I/O for 5 seconds... 00:13:03.635 34005.00 IOPS, 132.83 MiB/s [2024-11-29T07:44:54.965Z] 34084.50 IOPS, 133.14 MiB/s [2024-11-29T07:44:55.907Z] 34381.33 IOPS, 134.30 MiB/s [2024-11-29T07:44:56.850Z] 34689.25 IOPS, 135.50 MiB/s 00:13:06.906 Latency(us) 00:13:06.906 [2024-11-29T07:44:56.850Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:06.906 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:06.906 xnvme_bdev : 5.00 34796.58 135.92 0.00 0.00 1835.50 373.37 10687.41 00:13:06.906 [2024-11-29T07:44:56.850Z] =================================================================================================================== 00:13:06.906 [2024-11-29T07:44:56.850Z] Total : 34796.58 135.92 0.00 0.00 1835.50 373.37 10687.41 00:13:07.478 07:44:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:07.478 07:44:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:07.478 07:44:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:07.478 07:44:57 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:07.478 07:44:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:07.740 { 00:13:07.740 "subsystems": [ 00:13:07.740 { 00:13:07.740 "subsystem": "bdev", 00:13:07.740 "config": [ 00:13:07.740 { 00:13:07.740 "params": { 00:13:07.740 "io_mechanism": "io_uring", 00:13:07.740 "conserve_cpu": false, 00:13:07.740 "filename": "/dev/nvme0n1", 00:13:07.740 "name": "xnvme_bdev" 00:13:07.740 }, 00:13:07.740 "method": "bdev_xnvme_create" 00:13:07.740 }, 00:13:07.740 { 00:13:07.740 "method": "bdev_wait_for_examine" 00:13:07.740 } 00:13:07.740 ] 00:13:07.740 } 00:13:07.740 ] 00:13:07.740 } 00:13:07.740 [2024-11-29 07:44:57.491754] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
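The JSON block printed by gen_conf above is exactly what bdevperf consumes on /dev/fd/62. A minimal sketch for re-running this randread pass by hand, writing the config to a scratch file instead of a pipe (the /tmp path is illustrative, everything else is taken from the traced invocation):

cat > /tmp/xnvme_io_uring.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "io_mechanism": "io_uring",
            "conserve_cpu": false,
            "filename": "/dev/nvme0n1",
            "name": "xnvme_bdev"
          },
          "method": "bdev_xnvme_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
# Same flags as above: queue depth 64, 4 KiB random reads, 5 s, against xnvme_bdev.
build/examples/bdevperf --json /tmp/xnvme_io_uring.json -q 64 -w randread -t 5 -T xnvme_bdev -o 4096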
00:13:07.740 [2024-11-29 07:44:57.491903] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69883 ] 00:13:07.740 [2024-11-29 07:44:57.659674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.000 [2024-11-29 07:44:57.803700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.261 Running I/O for 5 seconds... 00:13:10.590 37950.00 IOPS, 148.24 MiB/s [2024-11-29T07:45:01.478Z] 36426.50 IOPS, 142.29 MiB/s [2024-11-29T07:45:02.422Z] 35914.67 IOPS, 140.29 MiB/s [2024-11-29T07:45:03.368Z] 35386.25 IOPS, 138.23 MiB/s [2024-11-29T07:45:03.368Z] 35260.20 IOPS, 137.74 MiB/s 00:13:13.424 Latency(us) 00:13:13.424 [2024-11-29T07:45:03.368Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:13.424 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:13.424 xnvme_bdev : 5.01 35226.06 137.60 0.00 0.00 1812.86 349.74 6604.01 00:13:13.424 [2024-11-29T07:45:03.368Z] =================================================================================================================== 00:13:13.424 [2024-11-29T07:45:03.368Z] Total : 35226.06 137.60 0.00 0.00 1812.86 349.74 6604.01 00:13:14.372 00:13:14.372 real 0m13.117s 00:13:14.372 user 0m6.273s 00:13:14.372 sys 0m6.571s 00:13:14.372 ************************************ 00:13:14.372 END TEST xnvme_bdevperf 00:13:14.372 ************************************ 00:13:14.372 07:45:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:14.372 07:45:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:14.372 07:45:04 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:14.372 07:45:04 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:14.372 07:45:04 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:14.372 07:45:04 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:14.372 ************************************ 00:13:14.372 START TEST xnvme_fio_plugin 00:13:14.372 ************************************ 00:13:14.372 07:45:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:14.372 07:45:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:14.372 07:45:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:14.372 07:45:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:14.372 07:45:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:14.372 07:45:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:14.372 07:45:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:14.372 07:45:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:13:14.372 07:45:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:14.372 07:45:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:14.372 07:45:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:14.372 07:45:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:14.372 07:45:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:14.372 07:45:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:14.372 07:45:04 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:14.372 07:45:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:14.372 07:45:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:14.372 07:45:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:14.372 07:45:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:14.372 07:45:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:14.372 07:45:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:14.372 07:45:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:14.372 07:45:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:14.372 07:45:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:14.372 { 00:13:14.372 "subsystems": [ 00:13:14.372 { 00:13:14.372 "subsystem": "bdev", 00:13:14.372 "config": [ 00:13:14.372 { 00:13:14.372 "params": { 00:13:14.372 "io_mechanism": "io_uring", 00:13:14.372 "conserve_cpu": false, 00:13:14.372 "filename": "/dev/nvme0n1", 00:13:14.372 "name": "xnvme_bdev" 00:13:14.372 }, 00:13:14.372 "method": "bdev_xnvme_create" 00:13:14.372 }, 00:13:14.372 { 00:13:14.372 "method": "bdev_wait_for_examine" 00:13:14.372 } 00:13:14.372 ] 00:13:14.372 } 00:13:14.372 ] 00:13:14.372 } 00:13:14.372 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:14.372 fio-3.35 00:13:14.372 Starting 1 thread 00:13:21.061 00:13:21.061 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69997: Fri Nov 29 07:45:10 2024 00:13:21.061 read: IOPS=33.6k, BW=131MiB/s (138MB/s)(656MiB/5002msec) 00:13:21.061 slat (usec): min=2, max=245, avg= 3.53, stdev= 2.58 00:13:21.061 clat (usec): min=1024, max=4369, avg=1761.09, stdev=268.25 00:13:21.061 lat (usec): min=1027, max=4383, avg=1764.62, stdev=268.57 00:13:21.061 clat percentiles (usec): 00:13:21.061 | 1.00th=[ 1319], 5.00th=[ 1418], 10.00th=[ 1467], 20.00th=[ 1549], 00:13:21.061 | 30.00th=[ 1614], 40.00th=[ 1663], 50.00th=[ 1713], 60.00th=[ 1778], 00:13:21.061 | 70.00th=[ 1844], 80.00th=[ 1942], 90.00th=[ 2089], 95.00th=[ 2245], 00:13:21.061 | 99.00th=[ 2606], 99.50th=[ 2835], 99.90th=[ 3195], 99.95th=[ 3326], 00:13:21.061 | 99.99th=[ 4293] 00:13:21.061 bw ( 
KiB/s): min=129536, max=137728, per=99.91%, avg=134200.89, stdev=3031.14, samples=9 00:13:21.061 iops : min=32384, max=34432, avg=33550.22, stdev=757.78, samples=9 00:13:21.061 lat (msec) : 2=84.49%, 4=15.47%, 10=0.04% 00:13:21.061 cpu : usr=29.07%, sys=69.05%, ctx=73, majf=0, minf=762 00:13:21.061 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:13:21.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:21.061 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:21.061 issued rwts: total=167971,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:21.061 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:21.061 00:13:21.061 Run status group 0 (all jobs): 00:13:21.061 READ: bw=131MiB/s (138MB/s), 131MiB/s-131MiB/s (138MB/s-138MB/s), io=656MiB (688MB), run=5002-5002msec 00:13:21.322 ----------------------------------------------------- 00:13:21.322 Suppressions used: 00:13:21.322 count bytes template 00:13:21.322 1 11 /usr/src/fio/parse.c 00:13:21.322 1 8 libtcmalloc_minimal.so 00:13:21.322 1 904 libcrypto.so 00:13:21.322 ----------------------------------------------------- 00:13:21.322 00:13:21.322 07:45:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:21.322 07:45:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:21.322 07:45:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:21.322 07:45:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:21.322 07:45:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:21.322 07:45:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:21.322 07:45:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:21.322 07:45:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:21.322 07:45:11 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:21.322 07:45:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:21.322 07:45:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:21.322 07:45:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:21.322 07:45:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:21.322 07:45:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:21.322 07:45:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:21.322 07:45:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:21.322 07:45:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:21.322 07:45:11 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:21.322 07:45:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:21.322 07:45:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:21.322 07:45:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:21.322 { 00:13:21.322 "subsystems": [ 00:13:21.322 { 00:13:21.322 "subsystem": "bdev", 00:13:21.322 "config": [ 00:13:21.322 { 00:13:21.322 "params": { 00:13:21.322 "io_mechanism": "io_uring", 00:13:21.322 "conserve_cpu": false, 00:13:21.322 "filename": "/dev/nvme0n1", 00:13:21.322 "name": "xnvme_bdev" 00:13:21.322 }, 00:13:21.322 "method": "bdev_xnvme_create" 00:13:21.322 }, 00:13:21.322 { 00:13:21.322 "method": "bdev_wait_for_examine" 00:13:21.322 } 00:13:21.322 ] 00:13:21.322 } 00:13:21.322 ] 00:13:21.322 } 00:13:21.582 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:21.582 fio-3.35 00:13:21.582 Starting 1 thread 00:13:28.172 00:13:28.172 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70094: Fri Nov 29 07:45:17 2024 00:13:28.172 write: IOPS=34.8k, BW=136MiB/s (143MB/s)(680MiB/5001msec); 0 zone resets 00:13:28.172 slat (usec): min=2, max=110, avg= 4.08, stdev= 2.00 00:13:28.172 clat (usec): min=296, max=5434, avg=1677.42, stdev=280.69 00:13:28.172 lat (usec): min=304, max=5444, avg=1681.50, stdev=281.11 00:13:28.172 clat percentiles (usec): 00:13:28.173 | 1.00th=[ 1188], 5.00th=[ 1303], 10.00th=[ 1369], 20.00th=[ 1467], 00:13:28.173 | 30.00th=[ 1532], 40.00th=[ 1598], 50.00th=[ 1647], 60.00th=[ 1713], 00:13:28.173 | 70.00th=[ 1778], 80.00th=[ 1860], 90.00th=[ 1991], 95.00th=[ 2147], 00:13:28.173 | 99.00th=[ 2507], 99.50th=[ 2737], 99.90th=[ 3752], 99.95th=[ 4228], 00:13:28.173 | 99.99th=[ 5342] 00:13:28.173 bw ( KiB/s): min=126464, max=145136, per=99.45%, avg=138523.56, stdev=5507.18, samples=9 00:13:28.173 iops : min=31616, max=36284, avg=34630.89, stdev=1376.79, samples=9 00:13:28.173 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.03% 00:13:28.173 lat (msec) : 2=90.16%, 4=9.70%, 10=0.07% 00:13:28.173 cpu : usr=32.20%, sys=66.52%, ctx=11, majf=0, minf=763 00:13:28.173 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.4%, 16=24.9%, 32=50.3%, >=64=1.6% 00:13:28.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:28.173 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:28.173 issued rwts: total=0,174143,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:28.173 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:28.173 00:13:28.173 Run status group 0 (all jobs): 00:13:28.173 WRITE: bw=136MiB/s (143MB/s), 136MiB/s-136MiB/s (143MB/s-143MB/s), io=680MiB (713MB), run=5001-5001msec 00:13:28.173 ----------------------------------------------------- 00:13:28.173 Suppressions used: 00:13:28.173 count bytes template 00:13:28.173 1 11 /usr/src/fio/parse.c 00:13:28.173 1 8 libtcmalloc_minimal.so 00:13:28.173 1 904 libcrypto.so 00:13:28.173 ----------------------------------------------------- 00:13:28.173 00:13:28.433 00:13:28.433 real 0m14.065s 00:13:28.433 user 0m6.155s 00:13:28.433 sys 0m7.434s 00:13:28.433 07:45:18 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:13:28.433 ************************************ 00:13:28.433 END TEST xnvme_fio_plugin 00:13:28.433 ************************************ 00:13:28.434 07:45:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:28.434 07:45:18 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:28.434 07:45:18 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:13:28.434 07:45:18 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:13:28.434 07:45:18 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:28.434 07:45:18 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:28.434 07:45:18 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:28.434 07:45:18 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:28.434 ************************************ 00:13:28.434 START TEST xnvme_rpc 00:13:28.434 ************************************ 00:13:28.434 07:45:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:28.434 07:45:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:28.434 07:45:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:28.434 07:45:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:28.434 07:45:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:28.434 07:45:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70180 00:13:28.434 07:45:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70180 00:13:28.434 07:45:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70180 ']' 00:13:28.434 07:45:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:28.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:28.434 07:45:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:28.434 07:45:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:28.434 07:45:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:28.434 07:45:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.434 07:45:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:28.434 [2024-11-29 07:45:18.305606] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
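This second xnvme_rpc pass repeats the flow with conserve_cpu enabled; per the cc table above, "true" maps to the -c flag on bdev_xnvme_create. A sketch of the equivalent manual calls (default RPC socket assumed):

scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c
scripts/rpc.py framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # expect: true
scripts/rpc.py bdev_xnvme_delete xnvme_bdev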
00:13:28.434 [2024-11-29 07:45:18.305770] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70180 ] 00:13:28.695 [2024-11-29 07:45:18.475588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:28.695 [2024-11-29 07:45:18.618751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.640 07:45:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:29.640 07:45:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:29.640 07:45:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:13:29.640 07:45:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.640 07:45:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.640 xnvme_bdev 00:13:29.640 07:45:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.640 07:45:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:29.640 07:45:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:29.640 07:45:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.640 07:45:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.640 07:45:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:29.640 07:45:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.640 07:45:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:29.640 07:45:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:29.640 07:45:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:29.640 07:45:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:29.640 07:45:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.640 07:45:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.640 07:45:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.640 07:45:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:29.640 07:45:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:29.640 07:45:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:29.640 07:45:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.640 07:45:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:29.640 07:45:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.640 07:45:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.640 07:45:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:29.640 07:45:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:29.640 07:45:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:29.640 07:45:19 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.640 07:45:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.640 07:45:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:29.640 07:45:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.640 07:45:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:13:29.640 07:45:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:29.640 07:45:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.640 07:45:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.901 07:45:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.901 07:45:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70180 00:13:29.901 07:45:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70180 ']' 00:13:29.901 07:45:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70180 00:13:29.901 07:45:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:29.901 07:45:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:29.902 07:45:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70180 00:13:29.902 07:45:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:29.902 07:45:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:29.902 killing process with pid 70180 00:13:29.902 07:45:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70180' 00:13:29.902 07:45:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70180 00:13:29.902 07:45:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70180 00:13:31.817 ************************************ 00:13:31.817 END TEST xnvme_rpc 00:13:31.817 ************************************ 00:13:31.817 00:13:31.817 real 0m3.100s 00:13:31.817 user 0m3.013s 00:13:31.817 sys 0m0.578s 00:13:31.817 07:45:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:31.817 07:45:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.817 07:45:21 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:31.817 07:45:21 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:31.817 07:45:21 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:31.817 07:45:21 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:31.817 ************************************ 00:13:31.817 START TEST xnvme_bdevperf 00:13:31.817 ************************************ 00:13:31.817 07:45:21 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:31.817 07:45:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:31.817 07:45:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:31.817 07:45:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:31.817 07:45:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:31.817 07:45:21 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:13:31.817 07:45:21 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:31.817 07:45:21 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:31.817 { 00:13:31.817 "subsystems": [ 00:13:31.817 { 00:13:31.817 "subsystem": "bdev", 00:13:31.817 "config": [ 00:13:31.817 { 00:13:31.817 "params": { 00:13:31.817 "io_mechanism": "io_uring", 00:13:31.817 "conserve_cpu": true, 00:13:31.817 "filename": "/dev/nvme0n1", 00:13:31.817 "name": "xnvme_bdev" 00:13:31.817 }, 00:13:31.817 "method": "bdev_xnvme_create" 00:13:31.817 }, 00:13:31.817 { 00:13:31.817 "method": "bdev_wait_for_examine" 00:13:31.817 } 00:13:31.817 ] 00:13:31.817 } 00:13:31.817 ] 00:13:31.817 } 00:13:31.817 [2024-11-29 07:45:21.445638] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:13:31.817 [2024-11-29 07:45:21.445921] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70250 ] 00:13:31.817 [2024-11-29 07:45:21.603429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.817 [2024-11-29 07:45:21.694870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.078 Running I/O for 5 seconds... 00:13:34.408 45284.00 IOPS, 176.89 MiB/s [2024-11-29T07:45:24.924Z] 40714.50 IOPS, 159.04 MiB/s [2024-11-29T07:45:26.309Z] 39616.00 IOPS, 154.75 MiB/s [2024-11-29T07:45:27.252Z] 38948.50 IOPS, 152.14 MiB/s 00:13:37.308 Latency(us) 00:13:37.308 [2024-11-29T07:45:27.252Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:37.308 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:37.308 xnvme_bdev : 5.00 38870.49 151.84 0.00 0.00 1643.01 554.54 10082.46 00:13:37.308 [2024-11-29T07:45:27.252Z] =================================================================================================================== 00:13:37.308 [2024-11-29T07:45:27.252Z] Total : 38870.49 151.84 0.00 0.00 1643.01 554.54 10082.46 00:13:37.568 07:45:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:37.829 07:45:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:37.829 07:45:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:37.829 07:45:27 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:37.829 07:45:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:37.829 { 00:13:37.829 "subsystems": [ 00:13:37.829 { 00:13:37.829 "subsystem": "bdev", 00:13:37.829 "config": [ 00:13:37.829 { 00:13:37.829 "params": { 00:13:37.829 "io_mechanism": "io_uring", 00:13:37.829 "conserve_cpu": true, 00:13:37.829 "filename": "/dev/nvme0n1", 00:13:37.829 "name": "xnvme_bdev" 00:13:37.829 }, 00:13:37.829 "method": "bdev_xnvme_create" 00:13:37.829 }, 00:13:37.829 { 00:13:37.829 "method": "bdev_wait_for_examine" 00:13:37.829 } 00:13:37.829 ] 00:13:37.829 } 00:13:37.829 ] 00:13:37.829 } 00:13:37.829 [2024-11-29 07:45:27.581297] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
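The write pass starting here reuses the conserve_cpu=true config dumped by gen_conf above; relative to the randread pass, only the workload flag changes. A one-line sketch, using the same scratch-file convention as before with "conserve_cpu": true in the JSON:

build/examples/bdevperf --json /tmp/xnvme_io_uring.json -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096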
00:13:37.829 [2024-11-29 07:45:27.581417] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70325 ] 00:13:37.829 [2024-11-29 07:45:27.739179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.090 [2024-11-29 07:45:27.831973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.352 Running I/O for 5 seconds... 00:13:40.238 48407.00 IOPS, 189.09 MiB/s [2024-11-29T07:45:31.126Z] 48775.50 IOPS, 190.53 MiB/s [2024-11-29T07:45:32.069Z] 49130.33 IOPS, 191.92 MiB/s [2024-11-29T07:45:33.453Z] 49180.75 IOPS, 192.11 MiB/s 00:13:43.509 Latency(us) 00:13:43.509 [2024-11-29T07:45:33.453Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:43.509 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:43.509 xnvme_bdev : 5.00 47824.51 186.81 0.00 0.00 1335.33 737.28 4738.76 00:13:43.509 [2024-11-29T07:45:33.453Z] =================================================================================================================== 00:13:43.509 [2024-11-29T07:45:33.453Z] Total : 47824.51 186.81 0.00 0.00 1335.33 737.28 4738.76 00:13:44.080 00:13:44.080 real 0m12.555s 00:13:44.080 user 0m9.553s 00:13:44.080 sys 0m2.555s 00:13:44.080 07:45:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:44.080 ************************************ 00:13:44.080 END TEST xnvme_bdevperf 00:13:44.080 ************************************ 00:13:44.080 07:45:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:44.080 07:45:33 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:44.080 07:45:33 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:44.080 07:45:33 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:44.080 07:45:33 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:44.080 ************************************ 00:13:44.080 START TEST xnvme_fio_plugin 00:13:44.080 ************************************ 00:13:44.080 07:45:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:44.080 07:45:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:44.080 07:45:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:44.080 07:45:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:44.080 07:45:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:44.080 07:45:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:44.080 07:45:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:44.080 07:45:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:44.080 07:45:33 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:44.080 07:45:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:44.080 07:45:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:44.080 07:45:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:44.080 07:45:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:44.080 07:45:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:44.081 07:45:33 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:44.081 07:45:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:44.081 07:45:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:44.081 07:45:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:44.081 07:45:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:44.342 07:45:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:44.342 07:45:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:44.342 07:45:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:44.342 07:45:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:44.342 07:45:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:44.342 { 00:13:44.342 "subsystems": [ 00:13:44.342 { 00:13:44.342 "subsystem": "bdev", 00:13:44.342 "config": [ 00:13:44.342 { 00:13:44.342 "params": { 00:13:44.342 "io_mechanism": "io_uring", 00:13:44.342 "conserve_cpu": true, 00:13:44.342 "filename": "/dev/nvme0n1", 00:13:44.342 "name": "xnvme_bdev" 00:13:44.342 }, 00:13:44.342 "method": "bdev_xnvme_create" 00:13:44.342 }, 00:13:44.342 { 00:13:44.342 "method": "bdev_wait_for_examine" 00:13:44.342 } 00:13:44.342 ] 00:13:44.342 } 00:13:44.342 ] 00:13:44.342 } 00:13:44.342 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:44.342 fio-3.35 00:13:44.342 Starting 1 thread 00:13:50.944 00:13:50.944 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70444: Fri Nov 29 07:45:39 2024 00:13:50.944 read: IOPS=34.2k, BW=134MiB/s (140MB/s)(668MiB/5001msec) 00:13:50.944 slat (nsec): min=2862, max=74498, avg=3539.80, stdev=1752.18 00:13:50.944 clat (usec): min=1172, max=3324, avg=1727.45, stdev=223.95 00:13:50.944 lat (usec): min=1175, max=3365, avg=1730.99, stdev=224.27 00:13:50.944 clat percentiles (usec): 00:13:50.944 | 1.00th=[ 1336], 5.00th=[ 1418], 10.00th=[ 1467], 20.00th=[ 1532], 00:13:50.944 | 30.00th=[ 1598], 40.00th=[ 1647], 50.00th=[ 1696], 60.00th=[ 1745], 00:13:50.944 | 70.00th=[ 1811], 80.00th=[ 1893], 90.00th=[ 2024], 95.00th=[ 2147], 00:13:50.944 | 99.00th=[ 2409], 99.50th=[ 2507], 99.90th=[ 2704], 99.95th=[ 2802], 00:13:50.944 | 99.99th=[ 3130] 00:13:50.944 bw ( KiB/s): min=134144, max=139264, per=99.95%, avg=136704.00, 
stdev=1578.09, samples=9 00:13:50.944 iops : min=33536, max=34816, avg=34176.00, stdev=394.52, samples=9 00:13:50.944 lat (msec) : 2=88.35%, 4=11.65% 00:13:50.944 cpu : usr=55.20%, sys=41.26%, ctx=15, majf=0, minf=762 00:13:50.944 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:13:50.944 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.944 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:13:50.944 issued rwts: total=171008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:50.944 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:50.944 00:13:50.944 Run status group 0 (all jobs): 00:13:50.944 READ: bw=134MiB/s (140MB/s), 134MiB/s-134MiB/s (140MB/s-140MB/s), io=668MiB (700MB), run=5001-5001msec 00:13:51.205 ----------------------------------------------------- 00:13:51.205 Suppressions used: 00:13:51.205 count bytes template 00:13:51.205 1 11 /usr/src/fio/parse.c 00:13:51.205 1 8 libtcmalloc_minimal.so 00:13:51.205 1 904 libcrypto.so 00:13:51.205 ----------------------------------------------------- 00:13:51.205 00:13:51.205 07:45:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:51.205 07:45:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:51.205 07:45:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:51.205 07:45:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:51.205 07:45:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:51.205 07:45:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:51.205 07:45:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:51.205 07:45:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:51.205 07:45:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:51.205 07:45:41 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:51.205 07:45:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:51.205 07:45:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:51.205 07:45:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:51.205 07:45:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:51.205 07:45:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:51.205 07:45:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:51.205 07:45:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:51.205 07:45:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:51.205 07:45:41 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:51.205 07:45:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:51.206 07:45:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:51.206 { 00:13:51.206 "subsystems": [ 00:13:51.206 { 00:13:51.206 "subsystem": "bdev", 00:13:51.206 "config": [ 00:13:51.206 { 00:13:51.206 "params": { 00:13:51.206 "io_mechanism": "io_uring", 00:13:51.206 "conserve_cpu": true, 00:13:51.206 "filename": "/dev/nvme0n1", 00:13:51.206 "name": "xnvme_bdev" 00:13:51.206 }, 00:13:51.206 "method": "bdev_xnvme_create" 00:13:51.206 }, 00:13:51.206 { 00:13:51.206 "method": "bdev_wait_for_examine" 00:13:51.206 } 00:13:51.206 ] 00:13:51.206 } 00:13:51.206 ] 00:13:51.206 } 00:13:51.469 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:51.469 fio-3.35 00:13:51.469 Starting 1 thread 00:13:58.104 00:13:58.104 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70536: Fri Nov 29 07:45:47 2024 00:13:58.104 write: IOPS=35.0k, BW=137MiB/s (143MB/s)(683MiB/5001msec); 0 zone resets 00:13:58.104 slat (usec): min=2, max=486, avg= 3.75, stdev= 2.20 00:13:58.104 clat (usec): min=243, max=9871, avg=1680.31, stdev=274.65 00:13:58.104 lat (usec): min=246, max=9874, avg=1684.06, stdev=274.97 00:13:58.104 clat percentiles (usec): 00:13:58.104 | 1.00th=[ 1237], 5.00th=[ 1352], 10.00th=[ 1401], 20.00th=[ 1483], 00:13:58.104 | 30.00th=[ 1532], 40.00th=[ 1598], 50.00th=[ 1647], 60.00th=[ 1696], 00:13:58.104 | 70.00th=[ 1762], 80.00th=[ 1860], 90.00th=[ 1991], 95.00th=[ 2114], 00:13:58.104 | 99.00th=[ 2442], 99.50th=[ 2540], 99.90th=[ 3818], 99.95th=[ 5604], 00:13:58.104 | 99.99th=[ 8160] 00:13:58.104 bw ( KiB/s): min=133357, max=148638, per=100.00%, avg=140179.89, stdev=4905.36, samples=9 00:13:58.104 iops : min=33339, max=37159, avg=35044.89, stdev=1226.28, samples=9 00:13:58.104 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.02% 00:13:58.104 lat (msec) : 2=90.71%, 4=9.14%, 10=0.10% 00:13:58.104 cpu : usr=62.16%, sys=34.10%, ctx=46, majf=0, minf=763 00:13:58.104 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:13:58.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:58.104 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:58.104 issued rwts: total=0,174855,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:58.104 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:58.104 00:13:58.104 Run status group 0 (all jobs): 00:13:58.104 WRITE: bw=137MiB/s (143MB/s), 137MiB/s-137MiB/s (143MB/s-143MB/s), io=683MiB (716MB), run=5001-5001msec 00:13:58.104 ----------------------------------------------------- 00:13:58.104 Suppressions used: 00:13:58.104 count bytes template 00:13:58.104 1 11 /usr/src/fio/parse.c 00:13:58.104 1 8 libtcmalloc_minimal.so 00:13:58.104 1 904 libcrypto.so 00:13:58.104 ----------------------------------------------------- 00:13:58.104 00:13:58.366 ************************************ 00:13:58.366 END TEST xnvme_fio_plugin 00:13:58.366 ************************************ 00:13:58.366 00:13:58.366 real 0m14.082s 00:13:58.366 user 0m8.896s 00:13:58.366 sys 0m4.488s 
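[Editor's sketch] The xnvme_fio_plugin test that just finished can be reproduced by hand. This is a minimal standalone sketch of the invocation recorded in the trace above, assuming this run's workspace layout (/home/vagrant/spdk_repo) and fio built at /usr/src/fio; the JSON is the exact bdev config the harness dumped, written to a temp file instead of being streamed over /dev/fd/62.

  # Reproduce the io_uring/conserve_cpu randread run above (sketch).
  cat > /tmp/xnvme_bdev.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "params": {
              "io_mechanism": "io_uring",
              "conserve_cpu": true,
              "filename": "/dev/nvme0n1",
              "name": "xnvme_bdev"
            },
            "method": "bdev_xnvme_create"
          },
          { "method": "bdev_wait_for_examine" }
        ]
      }
    ]
  }
  EOF

  # The ASan runtime is preloaded ahead of the fio plugin, as the trace shows.
  LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme_bdev.json \
      --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
      --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev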
00:13:58.366 07:45:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:58.366 07:45:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:58.366 07:45:48 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:13:58.366 07:45:48 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:13:58.366 07:45:48 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:13:58.366 07:45:48 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:13:58.366 07:45:48 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:13:58.366 07:45:48 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:58.366 07:45:48 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:13:58.366 07:45:48 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:13:58.366 07:45:48 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:58.366 07:45:48 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:58.366 07:45:48 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:58.366 07:45:48 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:58.366 ************************************ 00:13:58.366 START TEST xnvme_rpc 00:13:58.366 ************************************ 00:13:58.366 07:45:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:58.366 07:45:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:58.366 07:45:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:58.366 07:45:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:58.366 07:45:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:58.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:58.366 07:45:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70622 00:13:58.366 07:45:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70622 00:13:58.366 07:45:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70622 ']' 00:13:58.366 07:45:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:58.366 07:45:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:58.366 07:45:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:58.366 07:45:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:58.366 07:45:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.366 07:45:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:58.366 [2024-11-29 07:45:48.246754] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:13:58.366 [2024-11-29 07:45:48.247131] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70622 ] 00:13:58.628 [2024-11-29 07:45:48.410179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.628 [2024-11-29 07:45:48.553300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.576 07:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:59.576 07:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:59.576 07:45:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:13:59.576 07:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.576 07:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.576 xnvme_bdev 00:13:59.576 07:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.576 07:45:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:59.576 07:45:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:59.576 07:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.576 07:45:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:59.576 07:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.576 07:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.577 07:45:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:59.577 07:45:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:59.577 07:45:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:59.577 07:45:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:59.577 07:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.577 07:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.577 07:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.577 07:45:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:13:59.577 07:45:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:59.577 07:45:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:59.577 07:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.577 07:45:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:59.577 07:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.577 07:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.577 07:45:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:13:59.577 07:45:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:59.577 07:45:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:59.577 07:45:49 nvme_xnvme.xnvme_rpc -- 
xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:59.577 07:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.577 07:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.577 07:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.577 07:45:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:13:59.577 07:45:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:59.577 07:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.577 07:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.839 07:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.839 07:45:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70622 00:13:59.839 07:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70622 ']' 00:13:59.839 07:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70622 00:13:59.839 07:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:59.839 07:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:59.839 07:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70622 00:13:59.839 killing process with pid 70622 00:13:59.839 07:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:59.839 07:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:59.839 07:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70622' 00:13:59.839 07:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70622 00:13:59.839 07:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70622 00:14:01.758 00:14:01.758 real 0m3.244s 00:14:01.758 user 0m3.101s 00:14:01.758 sys 0m0.633s 00:14:01.758 07:45:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:01.758 07:45:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:01.758 ************************************ 00:14:01.758 END TEST xnvme_rpc 00:14:01.758 ************************************ 00:14:01.758 07:45:51 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:01.758 07:45:51 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:01.758 07:45:51 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:01.758 07:45:51 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:01.758 ************************************ 00:14:01.758 START TEST xnvme_bdevperf 00:14:01.758 ************************************ 00:14:01.758 07:45:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:01.758 07:45:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:01.758 07:45:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:14:01.758 07:45:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:01.758 07:45:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:01.758 07:45:51 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:01.758 07:45:51 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:01.758 07:45:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:01.758 { 00:14:01.758 "subsystems": [ 00:14:01.758 { 00:14:01.758 "subsystem": "bdev", 00:14:01.758 "config": [ 00:14:01.758 { 00:14:01.758 "params": { 00:14:01.758 "io_mechanism": "io_uring_cmd", 00:14:01.758 "conserve_cpu": false, 00:14:01.758 "filename": "/dev/ng0n1", 00:14:01.758 "name": "xnvme_bdev" 00:14:01.758 }, 00:14:01.758 "method": "bdev_xnvme_create" 00:14:01.758 }, 00:14:01.758 { 00:14:01.758 "method": "bdev_wait_for_examine" 00:14:01.758 } 00:14:01.758 ] 00:14:01.758 } 00:14:01.758 ] 00:14:01.758 } 00:14:01.758 [2024-11-29 07:45:51.542242] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:14:01.758 [2024-11-29 07:45:51.542574] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70702 ] 00:14:02.020 [2024-11-29 07:45:51.708477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.020 [2024-11-29 07:45:51.848741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.282 Running I/O for 5 seconds... 00:14:04.608 40000.00 IOPS, 156.25 MiB/s [2024-11-29T07:45:55.496Z] 40032.00 IOPS, 156.38 MiB/s [2024-11-29T07:45:56.437Z] 39814.00 IOPS, 155.52 MiB/s [2024-11-29T07:45:57.381Z] 40204.50 IOPS, 157.05 MiB/s [2024-11-29T07:45:57.381Z] 41491.20 IOPS, 162.07 MiB/s 00:14:07.437 Latency(us) 00:14:07.437 [2024-11-29T07:45:57.381Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:07.437 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:07.437 xnvme_bdev : 5.00 41489.45 162.07 0.00 0.00 1539.38 362.34 9527.93 00:14:07.437 [2024-11-29T07:45:57.381Z] =================================================================================================================== 00:14:07.437 [2024-11-29T07:45:57.381Z] Total : 41489.45 162.07 0.00 0.00 1539.38 362.34 9527.93 00:14:08.008 07:45:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:08.008 07:45:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:08.008 07:45:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:08.008 07:45:57 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:08.008 07:45:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:08.008 { 00:14:08.008 "subsystems": [ 00:14:08.008 { 00:14:08.008 "subsystem": "bdev", 00:14:08.008 "config": [ 00:14:08.008 { 00:14:08.008 "params": { 00:14:08.008 "io_mechanism": "io_uring_cmd", 00:14:08.008 "conserve_cpu": false, 00:14:08.008 "filename": "/dev/ng0n1", 00:14:08.008 "name": "xnvme_bdev" 00:14:08.008 }, 00:14:08.008 "method": "bdev_xnvme_create" 00:14:08.008 }, 00:14:08.008 { 00:14:08.008 "method": "bdev_wait_for_examine" 00:14:08.008 } 00:14:08.008 ] 00:14:08.008 } 00:14:08.008 ] 00:14:08.008 } 00:14:08.008 [2024-11-29 07:45:57.841724] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:14:08.008 [2024-11-29 07:45:57.841836] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70771 ] 00:14:08.270 [2024-11-29 07:45:57.995375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.270 [2024-11-29 07:45:58.083810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.531 Running I/O for 5 seconds... 00:14:10.420 44426.00 IOPS, 173.54 MiB/s [2024-11-29T07:46:01.306Z] 42828.50 IOPS, 167.30 MiB/s [2024-11-29T07:46:02.693Z] 42112.00 IOPS, 164.50 MiB/s [2024-11-29T07:46:03.634Z] 41872.00 IOPS, 163.56 MiB/s [2024-11-29T07:46:03.634Z] 41796.00 IOPS, 163.27 MiB/s 00:14:13.690 Latency(us) 00:14:13.690 [2024-11-29T07:46:03.635Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:13.691 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:13.691 xnvme_bdev : 5.00 41792.90 163.25 0.00 0.00 1527.89 340.28 9275.86 00:14:13.691 [2024-11-29T07:46:03.635Z] =================================================================================================================== 00:14:13.691 [2024-11-29T07:46:03.635Z] Total : 41792.90 163.25 0.00 0.00 1527.89 340.28 9275.86 00:14:14.262 07:46:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:14.262 07:46:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:14:14.262 07:46:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:14.262 07:46:04 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:14.262 07:46:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:14.524 { 00:14:14.524 "subsystems": [ 00:14:14.524 { 00:14:14.524 "subsystem": "bdev", 00:14:14.524 "config": [ 00:14:14.524 { 00:14:14.524 "params": { 00:14:14.524 "io_mechanism": "io_uring_cmd", 00:14:14.524 "conserve_cpu": false, 00:14:14.524 "filename": "/dev/ng0n1", 00:14:14.524 "name": "xnvme_bdev" 00:14:14.524 }, 00:14:14.524 "method": "bdev_xnvme_create" 00:14:14.524 }, 00:14:14.524 { 00:14:14.524 "method": "bdev_wait_for_examine" 00:14:14.524 } 00:14:14.524 ] 00:14:14.524 } 00:14:14.524 ] 00:14:14.524 } 00:14:14.524 [2024-11-29 07:46:04.248634] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:14:14.524 [2024-11-29 07:46:04.248780] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70845 ] 00:14:14.524 [2024-11-29 07:46:04.413076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.786 [2024-11-29 07:46:04.558191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.047 Running I/O for 5 seconds... 
00:14:17.384 71168.00 IOPS, 278.00 MiB/s [2024-11-29T07:46:07.901Z] 73824.00 IOPS, 288.38 MiB/s [2024-11-29T07:46:09.290Z] 80085.33 IOPS, 312.83 MiB/s [2024-11-29T07:46:10.235Z] 77776.00 IOPS, 303.81 MiB/s 00:14:20.291 Latency(us) 00:14:20.291 [2024-11-29T07:46:10.235Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:20.291 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:14:20.291 xnvme_bdev : 5.00 76231.63 297.78 0.00 0.00 835.96 450.56 4058.19 00:14:20.291 [2024-11-29T07:46:10.235Z] =================================================================================================================== 00:14:20.291 [2024-11-29T07:46:10.235Z] Total : 76231.63 297.78 0.00 0.00 835.96 450.56 4058.19 00:14:20.863 07:46:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:20.863 07:46:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:14:20.863 07:46:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:20.863 07:46:10 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:20.863 07:46:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:21.125 { 00:14:21.125 "subsystems": [ 00:14:21.125 { 00:14:21.125 "subsystem": "bdev", 00:14:21.125 "config": [ 00:14:21.125 { 00:14:21.125 "params": { 00:14:21.125 "io_mechanism": "io_uring_cmd", 00:14:21.125 "conserve_cpu": false, 00:14:21.125 "filename": "/dev/ng0n1", 00:14:21.125 "name": "xnvme_bdev" 00:14:21.125 }, 00:14:21.125 "method": "bdev_xnvme_create" 00:14:21.125 }, 00:14:21.125 { 00:14:21.125 "method": "bdev_wait_for_examine" 00:14:21.125 } 00:14:21.125 ] 00:14:21.125 } 00:14:21.125 ] 00:14:21.125 } 00:14:21.125 [2024-11-29 07:46:10.882665] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:14:21.125 [2024-11-29 07:46:10.882813] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70920 ] 00:14:21.125 [2024-11-29 07:46:11.045080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.387 [2024-11-29 07:46:11.187838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.649 Running I/O for 5 seconds... 
00:14:23.973 40142.00 IOPS, 156.80 MiB/s [2024-11-29T07:46:14.858Z] 36893.00 IOPS, 144.11 MiB/s [2024-11-29T07:46:15.867Z] 36906.67 IOPS, 144.17 MiB/s [2024-11-29T07:46:16.809Z] 36677.50 IOPS, 143.27 MiB/s [2024-11-29T07:46:16.809Z] 36374.80 IOPS, 142.09 MiB/s 00:14:26.865 Latency(us) 00:14:26.865 [2024-11-29T07:46:16.809Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:26.865 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:14:26.865 xnvme_bdev : 5.00 36365.63 142.05 0.00 0.00 1755.93 130.76 70173.93 00:14:26.865 [2024-11-29T07:46:16.809Z] =================================================================================================================== 00:14:26.865 [2024-11-29T07:46:16.809Z] Total : 36365.63 142.05 0.00 0.00 1755.93 130.76 70173.93 00:14:27.806 00:14:27.806 real 0m25.937s 00:14:27.806 user 0m14.622s 00:14:27.806 sys 0m10.824s 00:14:27.806 ************************************ 00:14:27.806 END TEST xnvme_bdevperf 00:14:27.806 ************************************ 00:14:27.806 07:46:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:27.806 07:46:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:27.806 07:46:17 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:27.806 07:46:17 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:27.806 07:46:17 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:27.806 07:46:17 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:27.806 ************************************ 00:14:27.806 START TEST xnvme_fio_plugin 00:14:27.806 ************************************ 00:14:27.806 07:46:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:27.806 07:46:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:27.806 07:46:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:14:27.806 07:46:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:27.806 07:46:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:27.806 07:46:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:27.806 07:46:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:27.806 07:46:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:27.806 07:46:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:27.806 07:46:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:27.806 07:46:17 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:27.806 07:46:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:27.806 07:46:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 
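[Editor's sketch] The trace lines around this point walk through the harness's sanitizer-preload probe. Condensed into a standalone sketch (paraphrasing the traced autotest_common.sh logic; variable names as traced):

  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  sanitizers=('libasan' 'libclang_rt.asan')
  asan_lib=
  for sanitizer in "${sanitizers[@]}"; do
    # Third ldd column is the resolved library path, e.g. /usr/lib64/libasan.so.8.
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n $asan_lib ]] && break
  done
  # fio itself is uninstrumented, so the ASan runtime must come first in
  # preload order, ahead of the instrumented plugin.
  LD_PRELOAD="$asan_lib $plugin"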
00:14:27.806 07:46:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:27.806 07:46:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:27.806 07:46:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:27.806 07:46:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:27.806 07:46:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:27.806 07:46:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:27.806 07:46:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:27.806 07:46:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:27.806 07:46:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:27.806 07:46:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:27.806 07:46:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:27.806 { 00:14:27.806 "subsystems": [ 00:14:27.806 { 00:14:27.806 "subsystem": "bdev", 00:14:27.806 "config": [ 00:14:27.806 { 00:14:27.806 "params": { 00:14:27.806 "io_mechanism": "io_uring_cmd", 00:14:27.806 "conserve_cpu": false, 00:14:27.806 "filename": "/dev/ng0n1", 00:14:27.806 "name": "xnvme_bdev" 00:14:27.806 }, 00:14:27.807 "method": "bdev_xnvme_create" 00:14:27.807 }, 00:14:27.807 { 00:14:27.807 "method": "bdev_wait_for_examine" 00:14:27.807 } 00:14:27.807 ] 00:14:27.807 } 00:14:27.807 ] 00:14:27.807 } 00:14:27.807 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:27.807 fio-3.35 00:14:27.807 Starting 1 thread 00:14:34.394 00:14:34.394 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71053: Fri Nov 29 07:46:23 2024 00:14:34.394 read: IOPS=38.8k, BW=152MiB/s (159MB/s)(759MiB/5001msec) 00:14:34.394 slat (nsec): min=2877, max=82450, avg=3594.55, stdev=1906.10 00:14:34.394 clat (usec): min=852, max=5607, avg=1501.82, stdev=288.91 00:14:34.394 lat (usec): min=855, max=5624, avg=1505.41, stdev=289.31 00:14:34.394 clat percentiles (usec): 00:14:34.394 | 1.00th=[ 1037], 5.00th=[ 1123], 10.00th=[ 1172], 20.00th=[ 1254], 00:14:34.394 | 30.00th=[ 1336], 40.00th=[ 1401], 50.00th=[ 1467], 60.00th=[ 1532], 00:14:34.394 | 70.00th=[ 1614], 80.00th=[ 1713], 90.00th=[ 1860], 95.00th=[ 2008], 00:14:34.394 | 99.00th=[ 2376], 99.50th=[ 2540], 99.90th=[ 2868], 99.95th=[ 3654], 00:14:34.394 | 99.99th=[ 5538] 00:14:34.394 bw ( KiB/s): min=141312, max=180736, per=98.79%, avg=153438.22, stdev=13314.56, samples=9 00:14:34.394 iops : min=35328, max=45184, avg=38359.44, stdev=3328.66, samples=9 00:14:34.394 lat (usec) : 1000=0.43% 00:14:34.394 lat (msec) : 2=94.51%, 4=5.02%, 10=0.03% 00:14:34.394 cpu : usr=35.82%, sys=62.82%, ctx=32, majf=0, minf=762 00:14:34.394 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:34.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:34.394 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=1.5%, >=64=0.0% 00:14:34.394 issued rwts: total=194176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:34.394 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:34.394 00:14:34.394 Run status group 0 (all jobs): 00:14:34.394 READ: bw=152MiB/s (159MB/s), 152MiB/s-152MiB/s (159MB/s-159MB/s), io=759MiB (795MB), run=5001-5001msec 00:14:34.654 ----------------------------------------------------- 00:14:34.654 Suppressions used: 00:14:34.654 count bytes template 00:14:34.654 1 11 /usr/src/fio/parse.c 00:14:34.654 1 8 libtcmalloc_minimal.so 00:14:34.654 1 904 libcrypto.so 00:14:34.654 ----------------------------------------------------- 00:14:34.654 00:14:34.655 07:46:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:34.655 07:46:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:34.655 07:46:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:34.655 07:46:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:34.655 07:46:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:34.655 07:46:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:34.655 07:46:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:34.655 07:46:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:34.655 07:46:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:34.655 07:46:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:34.655 07:46:24 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:34.655 07:46:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:34.655 07:46:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:34.655 07:46:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:34.655 07:46:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:34.655 07:46:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:34.655 07:46:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:34.655 07:46:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:34.655 07:46:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:34.655 07:46:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:34.655 07:46:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 
--numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:34.655 { 00:14:34.655 "subsystems": [ 00:14:34.655 { 00:14:34.655 "subsystem": "bdev", 00:14:34.655 "config": [ 00:14:34.655 { 00:14:34.655 "params": { 00:14:34.655 "io_mechanism": "io_uring_cmd", 00:14:34.655 "conserve_cpu": false, 00:14:34.655 "filename": "/dev/ng0n1", 00:14:34.655 "name": "xnvme_bdev" 00:14:34.655 }, 00:14:34.655 "method": "bdev_xnvme_create" 00:14:34.655 }, 00:14:34.655 { 00:14:34.655 "method": "bdev_wait_for_examine" 00:14:34.655 } 00:14:34.655 ] 00:14:34.655 } 00:14:34.655 ] 00:14:34.655 } 00:14:34.915 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:34.915 fio-3.35 00:14:34.915 Starting 1 thread 00:14:41.500 00:14:41.500 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71138: Fri Nov 29 07:46:30 2024 00:14:41.500 write: IOPS=37.5k, BW=146MiB/s (154MB/s)(733MiB/5003msec); 0 zone resets 00:14:41.500 slat (nsec): min=2907, max=74937, avg=4071.76, stdev=2168.30 00:14:41.500 clat (usec): min=143, max=6841, avg=1547.64, stdev=321.39 00:14:41.500 lat (usec): min=147, max=6844, avg=1551.71, stdev=321.74 00:14:41.500 clat percentiles (usec): 00:14:41.500 | 1.00th=[ 816], 5.00th=[ 1090], 10.00th=[ 1188], 20.00th=[ 1303], 00:14:41.500 | 30.00th=[ 1401], 40.00th=[ 1467], 50.00th=[ 1532], 60.00th=[ 1614], 00:14:41.500 | 70.00th=[ 1680], 80.00th=[ 1778], 90.00th=[ 1909], 95.00th=[ 2024], 00:14:41.500 | 99.00th=[ 2409], 99.50th=[ 2606], 99.90th=[ 3752], 99.95th=[ 4686], 00:14:41.500 | 99.99th=[ 5080] 00:14:41.500 bw ( KiB/s): min=141576, max=174960, per=100.00%, avg=151351.11, stdev=12237.18, samples=9 00:14:41.500 iops : min=35394, max=43740, avg=37838.00, stdev=3059.23, samples=9 00:14:41.500 lat (usec) : 250=0.01%, 500=0.13%, 750=0.53%, 1000=2.38% 00:14:41.500 lat (msec) : 2=91.12%, 4=5.77%, 10=0.06% 00:14:41.500 cpu : usr=37.35%, sys=61.30%, ctx=9, majf=0, minf=763 00:14:41.500 IO depths : 1=1.4%, 2=2.8%, 4=5.7%, 8=11.6%, 16=23.5%, 32=53.2%, >=64=1.7% 00:14:41.500 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:41.500 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.5%, >=64=0.0% 00:14:41.500 issued rwts: total=0,187602,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:41.500 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:41.500 00:14:41.500 Run status group 0 (all jobs): 00:14:41.500 WRITE: bw=146MiB/s (154MB/s), 146MiB/s-146MiB/s (154MB/s-154MB/s), io=733MiB (768MB), run=5003-5003msec 00:14:41.500 ----------------------------------------------------- 00:14:41.500 Suppressions used: 00:14:41.500 count bytes template 00:14:41.500 1 11 /usr/src/fio/parse.c 00:14:41.500 1 8 libtcmalloc_minimal.so 00:14:41.500 1 904 libcrypto.so 00:14:41.500 ----------------------------------------------------- 00:14:41.500 00:14:41.500 ************************************ 00:14:41.500 END TEST xnvme_fio_plugin 00:14:41.500 ************************************ 00:14:41.500 00:14:41.500 real 0m13.960s 00:14:41.500 user 0m6.625s 00:14:41.500 sys 0m6.870s 00:14:41.500 07:46:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:41.500 07:46:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:41.762 07:46:31 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:41.762 07:46:31 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:14:41.762 07:46:31 nvme_xnvme -- 
xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:14:41.762 07:46:31 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:41.762 07:46:31 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:41.762 07:46:31 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:41.762 07:46:31 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:41.762 ************************************ 00:14:41.762 START TEST xnvme_rpc 00:14:41.762 ************************************ 00:14:41.762 07:46:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:41.762 07:46:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:41.762 07:46:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:41.762 07:46:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:41.762 07:46:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:41.762 07:46:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71229 00:14:41.762 07:46:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71229 00:14:41.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:41.762 07:46:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71229 ']' 00:14:41.762 07:46:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:41.762 07:46:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:41.762 07:46:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:41.762 07:46:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:41.762 07:46:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:41.762 07:46:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:41.762 [2024-11-29 07:46:31.608424] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
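[Editor's sketch] The RPC sequence this xnvme_rpc test drives through rpc_cmd, written as equivalent standalone scripts/rpc.py calls. This is a sketch under the assumption that rpc_cmd forwards its arguments to scripts/rpc.py unchanged, and that -c is the conserve_cpu switch, per cc["true"]=-c in the trace above.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c   # create the bdev
  # Verify the registered params, as the test's rpc_xnvme helper does:
  $rpc framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
  $rpc bdev_xnvme_delete xnvme_bdev                              # tear down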
00:14:41.762 [2024-11-29 07:46:31.608595] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71229 ] 00:14:42.024 [2024-11-29 07:46:31.775305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.024 [2024-11-29 07:46:31.919979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.968 07:46:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:42.968 07:46:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:42.968 07:46:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:14:42.968 07:46:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.968 07:46:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.968 xnvme_bdev 00:14:42.968 07:46:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.968 07:46:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:42.968 07:46:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:42.968 07:46:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:42.968 07:46:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.968 07:46:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.968 07:46:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.968 07:46:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:42.968 07:46:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:42.968 07:46:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:42.968 07:46:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.968 07:46:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:42.968 07:46:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.968 07:46:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.968 07:46:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:14:42.968 07:46:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:42.968 07:46:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:42.968 07:46:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.968 07:46:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.968 07:46:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:42.968 07:46:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.968 07:46:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:14:42.968 07:46:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:42.968 07:46:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:42.968 07:46:32 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.968 07:46:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.968 07:46:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:42.968 07:46:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.968 07:46:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:14:42.968 07:46:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:42.968 07:46:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.969 07:46:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.969 07:46:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.969 07:46:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71229 00:14:42.969 07:46:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71229 ']' 00:14:42.969 07:46:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71229 00:14:42.969 07:46:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:42.969 07:46:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:42.969 07:46:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71229 00:14:42.969 killing process with pid 71229 00:14:42.969 07:46:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:42.969 07:46:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:42.969 07:46:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71229' 00:14:42.969 07:46:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71229 00:14:42.969 07:46:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71229 00:14:44.884 ************************************ 00:14:44.884 END TEST xnvme_rpc 00:14:44.884 ************************************ 00:14:44.884 00:14:44.884 real 0m3.057s 00:14:44.884 user 0m2.965s 00:14:44.884 sys 0m0.576s 00:14:44.884 07:46:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:44.884 07:46:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.884 07:46:34 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:44.884 07:46:34 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:44.884 07:46:34 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:44.884 07:46:34 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:44.884 ************************************ 00:14:44.884 START TEST xnvme_bdevperf 00:14:44.884 ************************************ 00:14:44.884 07:46:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:44.884 07:46:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:44.884 07:46:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:14:44.884 07:46:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:44.884 07:46:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:44.884 07:46:34 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:44.884 07:46:34 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:44.884 07:46:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:44.884 { 00:14:44.884 "subsystems": [ 00:14:44.884 { 00:14:44.884 "subsystem": "bdev", 00:14:44.884 "config": [ 00:14:44.884 { 00:14:44.884 "params": { 00:14:44.884 "io_mechanism": "io_uring_cmd", 00:14:44.884 "conserve_cpu": true, 00:14:44.884 "filename": "/dev/ng0n1", 00:14:44.884 "name": "xnvme_bdev" 00:14:44.884 }, 00:14:44.884 "method": "bdev_xnvme_create" 00:14:44.884 }, 00:14:44.884 { 00:14:44.884 "method": "bdev_wait_for_examine" 00:14:44.884 } 00:14:44.884 ] 00:14:44.884 } 00:14:44.884 ] 00:14:44.884 } 00:14:44.884 [2024-11-29 07:46:34.695506] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:14:44.884 [2024-11-29 07:46:34.695615] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71303 ] 00:14:45.144 [2024-11-29 07:46:34.851432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.144 [2024-11-29 07:46:34.943955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.410 Running I/O for 5 seconds... 00:14:47.296 44001.00 IOPS, 171.88 MiB/s [2024-11-29T07:46:38.184Z] 44240.50 IOPS, 172.81 MiB/s [2024-11-29T07:46:39.571Z] 43723.00 IOPS, 170.79 MiB/s [2024-11-29T07:46:40.515Z] 43592.25 IOPS, 170.28 MiB/s 00:14:50.571 Latency(us) 00:14:50.571 [2024-11-29T07:46:40.515Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.571 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:50.571 xnvme_bdev : 5.00 43437.19 169.68 0.00 0.00 1470.01 652.21 4511.90 00:14:50.571 [2024-11-29T07:46:40.515Z] =================================================================================================================== 00:14:50.571 [2024-11-29T07:46:40.515Z] Total : 43437.19 169.68 0.00 0.00 1470.01 652.21 4511.90 00:14:51.143 07:46:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:51.143 07:46:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:51.143 07:46:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:51.143 07:46:41 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:51.143 07:46:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:51.143 { 00:14:51.143 "subsystems": [ 00:14:51.143 { 00:14:51.143 "subsystem": "bdev", 00:14:51.143 "config": [ 00:14:51.143 { 00:14:51.143 "params": { 00:14:51.143 "io_mechanism": "io_uring_cmd", 00:14:51.143 "conserve_cpu": true, 00:14:51.143 "filename": "/dev/ng0n1", 00:14:51.143 "name": "xnvme_bdev" 00:14:51.143 }, 00:14:51.143 "method": "bdev_xnvme_create" 00:14:51.143 }, 00:14:51.143 { 00:14:51.143 "method": "bdev_wait_for_examine" 00:14:51.143 } 00:14:51.143 ] 00:14:51.143 } 00:14:51.143 ] 00:14:51.143 } 00:14:51.404 [2024-11-29 07:46:41.119282] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
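[Editor's sketch] For reference, the bdevperf command this test issues for each io_pattern, reduced to a standalone sketch: the config below is the exact JSON dumped above, written to a file instead of /dev/fd/62; the randread case is shown, with -w stepping through randwrite, unmap and write_zeroes in the later runs.

  cat > /tmp/xnvme_io_uring_cmd.json <<'EOF'
  {
    "subsystems": [
      { "subsystem": "bdev", "config": [
        { "params": { "io_mechanism": "io_uring_cmd", "conserve_cpu": true,
                      "filename": "/dev/ng0n1", "name": "xnvme_bdev" },
          "method": "bdev_xnvme_create" },
        { "method": "bdev_wait_for_examine" } ] } ]
  }
  EOF

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /tmp/xnvme_io_uring_cmd.json -q 64 -w randread -t 5 -T xnvme_bdev -o 4096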
00:14:51.404 [2024-11-29 07:46:41.119640] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71376 ] 00:14:51.404 [2024-11-29 07:46:41.286971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.664 [2024-11-29 07:46:41.424252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.925 Running I/O for 5 seconds... 00:14:54.253 43671.00 IOPS, 170.59 MiB/s [2024-11-29T07:46:44.766Z] 42913.50 IOPS, 167.63 MiB/s [2024-11-29T07:46:46.148Z] 42454.33 IOPS, 165.84 MiB/s [2024-11-29T07:46:47.093Z] 42198.75 IOPS, 164.84 MiB/s [2024-11-29T07:46:47.093Z] 41552.60 IOPS, 162.31 MiB/s 00:14:57.149 Latency(us) 00:14:57.149 [2024-11-29T07:46:47.093Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:57.149 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:57.149 xnvme_bdev : 5.00 41538.84 162.26 0.00 0.00 1536.22 463.16 7208.96 00:14:57.149 [2024-11-29T07:46:47.093Z] =================================================================================================================== 00:14:57.149 [2024-11-29T07:46:47.093Z] Total : 41538.84 162.26 0.00 0.00 1536.22 463.16 7208.96 00:14:57.720 07:46:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:57.720 07:46:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:14:57.720 07:46:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:57.720 07:46:47 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:57.720 07:46:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:57.720 { 00:14:57.720 "subsystems": [ 00:14:57.720 { 00:14:57.720 "subsystem": "bdev", 00:14:57.720 "config": [ 00:14:57.720 { 00:14:57.720 "params": { 00:14:57.720 "io_mechanism": "io_uring_cmd", 00:14:57.720 "conserve_cpu": true, 00:14:57.720 "filename": "/dev/ng0n1", 00:14:57.720 "name": "xnvme_bdev" 00:14:57.720 }, 00:14:57.720 "method": "bdev_xnvme_create" 00:14:57.720 }, 00:14:57.720 { 00:14:57.720 "method": "bdev_wait_for_examine" 00:14:57.720 } 00:14:57.720 ] 00:14:57.720 } 00:14:57.720 ] 00:14:57.720 } 00:14:57.721 [2024-11-29 07:46:47.625719] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:14:57.721 [2024-11-29 07:46:47.625864] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71446 ] 00:14:57.981 [2024-11-29 07:46:47.789893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.981 [2024-11-29 07:46:47.916668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.553 Running I/O for 5 seconds... 
00:15:00.465 73984.00 IOPS, 289.00 MiB/s [2024-11-29T07:46:51.426Z] 74240.00 IOPS, 290.00 MiB/s [2024-11-29T07:46:52.371Z] 75925.33 IOPS, 296.58 MiB/s [2024-11-29T07:46:53.308Z] 76736.00 IOPS, 299.75 MiB/s [2024-11-29T07:46:53.308Z] 79552.00 IOPS, 310.75 MiB/s 00:15:03.364 Latency(us) 00:15:03.364 [2024-11-29T07:46:53.308Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:03.364 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:15:03.364 xnvme_bdev : 5.00 79518.60 310.62 0.00 0.00 801.28 359.19 3629.69 00:15:03.364 [2024-11-29T07:46:53.308Z] =================================================================================================================== 00:15:03.364 [2024-11-29T07:46:53.308Z] Total : 79518.60 310.62 0.00 0.00 801.28 359.19 3629.69 00:15:03.933 07:46:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:03.933 07:46:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:15:03.933 07:46:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:03.933 07:46:53 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:03.933 07:46:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:03.933 { 00:15:03.933 "subsystems": [ 00:15:03.933 { 00:15:03.933 "subsystem": "bdev", 00:15:03.933 "config": [ 00:15:03.933 { 00:15:03.933 "params": { 00:15:03.933 "io_mechanism": "io_uring_cmd", 00:15:03.933 "conserve_cpu": true, 00:15:03.933 "filename": "/dev/ng0n1", 00:15:03.933 "name": "xnvme_bdev" 00:15:03.933 }, 00:15:03.933 "method": "bdev_xnvme_create" 00:15:03.933 }, 00:15:03.933 { 00:15:03.933 "method": "bdev_wait_for_examine" 00:15:03.933 } 00:15:03.933 ] 00:15:03.933 } 00:15:03.933 ] 00:15:03.933 } 00:15:03.933 [2024-11-29 07:46:53.845005] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:15:03.933 [2024-11-29 07:46:53.845121] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71520 ] 00:15:04.193 [2024-11-29 07:46:54.003129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.193 [2024-11-29 07:46:54.094338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.453 Running I/O for 5 seconds... 
00:15:06.772 51147.00 IOPS, 199.79 MiB/s [2024-11-29T07:46:57.653Z] 49353.50 IOPS, 192.79 MiB/s [2024-11-29T07:46:58.590Z] 45674.33 IOPS, 178.42 MiB/s [2024-11-29T07:46:59.533Z] 44675.25 IOPS, 174.51 MiB/s [2024-11-29T07:46:59.533Z] 41626.20 IOPS, 162.60 MiB/s 00:15:09.589 Latency(us) 00:15:09.589 [2024-11-29T07:46:59.533Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:09.589 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:15:09.589 xnvme_bdev : 5.01 41593.44 162.47 0.00 0.00 1533.46 95.70 21475.64 00:15:09.589 [2024-11-29T07:46:59.533Z] =================================================================================================================== 00:15:09.589 [2024-11-29T07:46:59.533Z] Total : 41593.44 162.47 0.00 0.00 1533.46 95.70 21475.64 00:15:10.161 00:15:10.161 real 0m25.463s 00:15:10.161 user 0m17.541s 00:15:10.161 sys 0m5.708s 00:15:10.161 ************************************ 00:15:10.161 END TEST xnvme_bdevperf 00:15:10.161 ************************************ 00:15:10.161 07:47:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:10.161 07:47:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:10.421 07:47:00 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:10.421 07:47:00 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:10.421 07:47:00 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:10.421 07:47:00 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:10.421 ************************************ 00:15:10.421 START TEST xnvme_fio_plugin 00:15:10.421 ************************************ 00:15:10.421 07:47:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:10.421 07:47:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:10.421 07:47:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:15:10.421 07:47:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:10.421 07:47:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:10.421 07:47:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:10.421 07:47:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:10.421 07:47:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:10.421 07:47:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:10.421 07:47:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:10.421 07:47:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:10.421 07:47:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:10.421 07:47:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # 
local asan_lib= 00:15:10.421 07:47:00 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:10.421 07:47:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:10.421 07:47:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:10.421 07:47:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:10.421 07:47:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:10.421 07:47:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:10.421 07:47:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:10.421 07:47:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:10.421 07:47:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:10.421 07:47:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:10.421 07:47:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:10.421 { 00:15:10.421 "subsystems": [ 00:15:10.421 { 00:15:10.421 "subsystem": "bdev", 00:15:10.421 "config": [ 00:15:10.421 { 00:15:10.421 "params": { 00:15:10.421 "io_mechanism": "io_uring_cmd", 00:15:10.421 "conserve_cpu": true, 00:15:10.421 "filename": "/dev/ng0n1", 00:15:10.421 "name": "xnvme_bdev" 00:15:10.421 }, 00:15:10.421 "method": "bdev_xnvme_create" 00:15:10.421 }, 00:15:10.421 { 00:15:10.421 "method": "bdev_wait_for_examine" 00:15:10.421 } 00:15:10.421 ] 00:15:10.422 } 00:15:10.422 ] 00:15:10.422 } 00:15:10.682 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:10.682 fio-3.35 00:15:10.682 Starting 1 thread 00:15:17.274 00:15:17.274 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71637: Fri Nov 29 07:47:06 2024 00:15:17.274 read: IOPS=41.9k, BW=164MiB/s (172MB/s)(819MiB/5001msec) 00:15:17.274 slat (nsec): min=2880, max=90368, avg=3317.39, stdev=1566.91 00:15:17.274 clat (usec): min=809, max=4273, avg=1394.28, stdev=278.05 00:15:17.274 lat (usec): min=812, max=4328, avg=1397.60, stdev=278.51 00:15:17.274 clat percentiles (usec): 00:15:17.274 | 1.00th=[ 988], 5.00th=[ 1057], 10.00th=[ 1106], 20.00th=[ 1172], 00:15:17.274 | 30.00th=[ 1221], 40.00th=[ 1270], 50.00th=[ 1336], 60.00th=[ 1401], 00:15:17.274 | 70.00th=[ 1500], 80.00th=[ 1598], 90.00th=[ 1762], 95.00th=[ 1909], 00:15:17.274 | 99.00th=[ 2212], 99.50th=[ 2376], 99.90th=[ 2900], 99.95th=[ 3785], 00:15:17.274 | 99.99th=[ 4146] 00:15:17.274 bw ( KiB/s): min=144384, max=187392, per=99.84%, avg=167424.00, stdev=13220.61, samples=9 00:15:17.274 iops : min=36096, max=46848, avg=41856.00, stdev=3305.15, samples=9 00:15:17.274 lat (usec) : 1000=1.26% 00:15:17.274 lat (msec) : 2=95.66%, 4=3.04%, 10=0.03% 00:15:17.274 cpu : usr=73.36%, sys=23.96%, ctx=10, majf=0, minf=762 00:15:17.274 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:17.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:17.274 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=1.5%, >=64=0.0% 00:15:17.274 issued rwts: total=209664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:17.274 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:17.274 00:15:17.274 Run status group 0 (all jobs): 00:15:17.275 READ: bw=164MiB/s (172MB/s), 164MiB/s-164MiB/s (172MB/s-172MB/s), io=819MiB (859MB), run=5001-5001msec 00:15:17.275 ----------------------------------------------------- 00:15:17.275 Suppressions used: 00:15:17.275 count bytes template 00:15:17.275 1 11 /usr/src/fio/parse.c 00:15:17.275 1 8 libtcmalloc_minimal.so 00:15:17.275 1 904 libcrypto.so 00:15:17.275 ----------------------------------------------------- 00:15:17.275 00:15:17.275 07:47:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:17.275 07:47:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:17.275 07:47:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:17.275 07:47:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:17.275 07:47:07 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:17.275 07:47:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:17.275 07:47:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:17.275 07:47:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:17.275 07:47:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:17.275 07:47:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:17.275 07:47:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:17.275 07:47:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:17.275 07:47:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:17.275 07:47:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:17.275 07:47:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:17.275 07:47:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:17.275 07:47:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:17.275 07:47:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:17.275 07:47:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:17.275 07:47:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:17.275 07:47:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 
--numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:17.275 { 00:15:17.275 "subsystems": [ 00:15:17.275 { 00:15:17.275 "subsystem": "bdev", 00:15:17.275 "config": [ 00:15:17.275 { 00:15:17.275 "params": { 00:15:17.275 "io_mechanism": "io_uring_cmd", 00:15:17.275 "conserve_cpu": true, 00:15:17.275 "filename": "/dev/ng0n1", 00:15:17.275 "name": "xnvme_bdev" 00:15:17.275 }, 00:15:17.275 "method": "bdev_xnvme_create" 00:15:17.275 }, 00:15:17.275 { 00:15:17.275 "method": "bdev_wait_for_examine" 00:15:17.275 } 00:15:17.275 ] 00:15:17.275 } 00:15:17.275 ] 00:15:17.275 } 00:15:17.537 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:17.537 fio-3.35 00:15:17.537 Starting 1 thread 00:15:24.130 00:15:24.130 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71729: Fri Nov 29 07:47:12 2024 00:15:24.130 write: IOPS=40.8k, BW=159MiB/s (167MB/s)(796MiB/5002msec); 0 zone resets 00:15:24.130 slat (usec): min=2, max=610, avg= 4.25, stdev= 3.06 00:15:24.130 clat (usec): min=467, max=5534, avg=1407.09, stdev=287.76 00:15:24.130 lat (usec): min=470, max=5537, avg=1411.34, stdev=288.39 00:15:24.130 clat percentiles (usec): 00:15:24.130 | 1.00th=[ 963], 5.00th=[ 1057], 10.00th=[ 1106], 20.00th=[ 1172], 00:15:24.130 | 30.00th=[ 1237], 40.00th=[ 1287], 50.00th=[ 1352], 60.00th=[ 1418], 00:15:24.130 | 70.00th=[ 1516], 80.00th=[ 1614], 90.00th=[ 1778], 95.00th=[ 1926], 00:15:24.130 | 99.00th=[ 2278], 99.50th=[ 2442], 99.90th=[ 3228], 99.95th=[ 3621], 00:15:24.130 | 99.99th=[ 4752] 00:15:24.130 bw ( KiB/s): min=140184, max=178136, per=99.27%, avg=161850.67, stdev=12785.87, samples=9 00:15:24.130 iops : min=35046, max=44534, avg=40462.67, stdev=3196.47, samples=9 00:15:24.130 lat (usec) : 500=0.01%, 750=0.06%, 1000=1.99% 00:15:24.130 lat (msec) : 2=94.59%, 4=3.33%, 10=0.02% 00:15:24.130 cpu : usr=58.59%, sys=35.21%, ctx=19, majf=0, minf=763 00:15:24.130 IO depths : 1=1.4%, 2=2.9%, 4=6.0%, 8=12.3%, 16=24.9%, 32=50.7%, >=64=1.7% 00:15:24.130 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:24.130 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:15:24.130 issued rwts: total=0,203873,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:24.130 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:24.130 00:15:24.130 Run status group 0 (all jobs): 00:15:24.130 WRITE: bw=159MiB/s (167MB/s), 159MiB/s-159MiB/s (167MB/s-167MB/s), io=796MiB (835MB), run=5002-5002msec 00:15:24.130 ----------------------------------------------------- 00:15:24.130 Suppressions used: 00:15:24.130 count bytes template 00:15:24.130 1 11 /usr/src/fio/parse.c 00:15:24.130 1 8 libtcmalloc_minimal.so 00:15:24.130 1 904 libcrypto.so 00:15:24.130 ----------------------------------------------------- 00:15:24.130 00:15:24.130 00:15:24.130 real 0m13.802s 00:15:24.130 user 0m9.404s 00:15:24.130 sys 0m3.629s 00:15:24.130 07:47:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:24.130 07:47:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:24.130 ************************************ 00:15:24.130 END TEST xnvme_fio_plugin 00:15:24.130 ************************************ 00:15:24.130 Process with pid 71229 is not found 00:15:24.130 07:47:14 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 71229 00:15:24.130 07:47:14 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 71229 ']' 00:15:24.130 07:47:14 nvme_xnvme -- 
common/autotest_common.sh@958 -- # kill -0 71229 00:15:24.130 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71229) - No such process 00:15:24.130 07:47:14 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 71229 is not found' 00:15:24.130 00:15:24.130 real 3m32.746s 00:15:24.130 user 2m0.757s 00:15:24.130 sys 1m17.277s 00:15:24.130 ************************************ 00:15:24.130 END TEST nvme_xnvme 00:15:24.130 ************************************ 00:15:24.130 07:47:14 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:24.130 07:47:14 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:24.391 07:47:14 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:24.391 07:47:14 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:24.391 07:47:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:24.391 07:47:14 -- common/autotest_common.sh@10 -- # set +x 00:15:24.391 ************************************ 00:15:24.391 START TEST blockdev_xnvme 00:15:24.391 ************************************ 00:15:24.391 07:47:14 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:24.391 * Looking for test storage... 00:15:24.391 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:15:24.391 07:47:14 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:24.391 07:47:14 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:24.391 07:47:14 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:15:24.391 07:47:14 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:24.391 07:47:14 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:24.391 07:47:14 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:24.391 07:47:14 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:24.391 07:47:14 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:24.391 07:47:14 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:24.391 07:47:14 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:24.391 07:47:14 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:24.391 07:47:14 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:24.391 07:47:14 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:24.391 07:47:14 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:24.391 07:47:14 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:24.391 07:47:14 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:15:24.391 07:47:14 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:15:24.391 07:47:14 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:24.391 07:47:14 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:24.391 07:47:14 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:15:24.391 07:47:14 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:15:24.391 07:47:14 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:24.391 07:47:14 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:15:24.391 07:47:14 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:24.391 07:47:14 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:15:24.391 07:47:14 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:15:24.391 07:47:14 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:24.391 07:47:14 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:15:24.391 07:47:14 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:24.391 07:47:14 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:24.391 07:47:14 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:24.391 07:47:14 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:15:24.391 07:47:14 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:24.391 07:47:14 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:24.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.391 --rc genhtml_branch_coverage=1 00:15:24.391 --rc genhtml_function_coverage=1 00:15:24.391 --rc genhtml_legend=1 00:15:24.391 --rc geninfo_all_blocks=1 00:15:24.391 --rc geninfo_unexecuted_blocks=1 00:15:24.391 00:15:24.391 ' 00:15:24.391 07:47:14 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:24.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.391 --rc genhtml_branch_coverage=1 00:15:24.391 --rc genhtml_function_coverage=1 00:15:24.391 --rc genhtml_legend=1 00:15:24.391 --rc geninfo_all_blocks=1 00:15:24.391 --rc geninfo_unexecuted_blocks=1 00:15:24.391 00:15:24.391 ' 00:15:24.391 07:47:14 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:24.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.391 --rc genhtml_branch_coverage=1 00:15:24.391 --rc genhtml_function_coverage=1 00:15:24.391 --rc genhtml_legend=1 00:15:24.391 --rc geninfo_all_blocks=1 00:15:24.391 --rc geninfo_unexecuted_blocks=1 00:15:24.391 00:15:24.391 ' 00:15:24.391 07:47:14 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:24.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.391 --rc genhtml_branch_coverage=1 00:15:24.392 --rc genhtml_function_coverage=1 00:15:24.392 --rc genhtml_legend=1 00:15:24.392 --rc geninfo_all_blocks=1 00:15:24.392 --rc geninfo_unexecuted_blocks=1 00:15:24.392 00:15:24.392 ' 00:15:24.392 07:47:14 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:24.392 07:47:14 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:15:24.392 07:47:14 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:15:24.392 07:47:14 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:24.392 07:47:14 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:15:24.392 07:47:14 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:15:24.392 07:47:14 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:15:24.392 07:47:14 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:15:24.392 07:47:14 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:15:24.392 07:47:14 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:15:24.392 07:47:14 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:15:24.392 07:47:14 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:15:24.392 07:47:14 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:15:24.392 07:47:14 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:15:24.392 07:47:14 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:15:24.392 07:47:14 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:15:24.392 07:47:14 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:15:24.392 07:47:14 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:15:24.392 07:47:14 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:15:24.392 07:47:14 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:15:24.392 07:47:14 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:15:24.392 07:47:14 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:15:24.392 07:47:14 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:15:24.392 07:47:14 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:15:24.392 07:47:14 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=71859 00:15:24.392 07:47:14 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:24.392 07:47:14 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 71859 00:15:24.392 07:47:14 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 71859 ']' 00:15:24.392 07:47:14 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:24.392 07:47:14 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:24.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:24.392 07:47:14 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:15:24.392 07:47:14 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:24.392 07:47:14 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:24.392 07:47:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:24.652 [2024-11-29 07:47:14.363711] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:15:24.652 [2024-11-29 07:47:14.364020] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71859 ] 00:15:24.652 [2024-11-29 07:47:14.531294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.913 [2024-11-29 07:47:14.655485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.485 07:47:15 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:25.485 07:47:15 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:15:25.485 07:47:15 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:15:25.485 07:47:15 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:15:25.485 07:47:15 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:15:25.485 07:47:15 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:15:25.485 07:47:15 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:26.056 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:26.631 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:15:26.631 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:15:26.631 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:15:26.631 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:15:26.631 07:47:16 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:15:26.631 07:47:16 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:15:26.631 07:47:16 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:15:26.631 07:47:16 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:15:26.631 07:47:16 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:26.631 07:47:16 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:15:26.631 07:47:16 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:15:26.631 07:47:16 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:26.631 07:47:16 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:26.631 07:47:16 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:26.631 07:47:16 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n2 00:15:26.631 07:47:16 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:15:26.631 07:47:16 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:15:26.631 07:47:16 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:26.631 07:47:16 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:26.631 07:47:16 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n3 00:15:26.631 07:47:16 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:15:26.631 07:47:16 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:15:26.631 07:47:16 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:26.631 07:47:16 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 
00:15:26.631 07:47:16 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:15:26.631 07:47:16 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:15:26.631 07:47:16 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:26.631 07:47:16 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:26.631 07:47:16 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:26.631 07:47:16 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:15:26.631 07:47:16 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:15:26.631 07:47:16 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:15:26.631 07:47:16 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:26.631 07:47:16 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:26.631 07:47:16 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:15:26.631 07:47:16 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:15:26.631 07:47:16 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:15:26.631 07:47:16 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:26.631 07:47:16 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:26.631 07:47:16 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:15:26.631 07:47:16 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:15:26.631 07:47:16 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:15:26.631 07:47:16 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:26.631 07:47:16 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:26.631 07:47:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:15:26.631 07:47:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:26.631 07:47:16 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:26.631 07:47:16 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:26.631 07:47:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:15:26.631 07:47:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:26.631 07:47:16 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:26.631 07:47:16 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:26.631 07:47:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:15:26.631 07:47:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:26.631 07:47:16 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:26.631 07:47:16 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:26.631 07:47:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:15:26.631 07:47:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:26.631 07:47:16 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:26.631 07:47:16 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme 
in /dev/nvme*n* 00:15:26.631 07:47:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:15:26.631 07:47:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:26.631 07:47:16 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:26.631 07:47:16 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:26.631 07:47:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:15:26.631 07:47:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:26.631 07:47:16 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:26.631 07:47:16 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:15:26.631 07:47:16 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:15:26.631 07:47:16 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.631 07:47:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:26.631 07:47:16 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:15:26.631 nvme0n1 00:15:26.631 nvme0n2 00:15:26.631 nvme0n3 00:15:26.631 nvme1n1 00:15:26.631 nvme2n1 00:15:26.631 nvme3n1 00:15:26.631 07:47:16 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.631 07:47:16 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:15:26.631 07:47:16 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.631 07:47:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:26.631 07:47:16 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.631 07:47:16 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:15:26.631 07:47:16 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:15:26.631 07:47:16 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.631 07:47:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:26.631 07:47:16 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.631 07:47:16 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:15:26.631 07:47:16 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.631 07:47:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:26.631 07:47:16 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.631 07:47:16 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:15:26.631 07:47:16 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.631 07:47:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:26.631 07:47:16 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.631 07:47:16 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:15:26.631 07:47:16 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:15:26.631 07:47:16 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:15:26.631 07:47:16 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.631 07:47:16 blockdev_xnvme -- 
common/autotest_common.sh@10 -- # set +x 00:15:26.893 07:47:16 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.893 07:47:16 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:15:26.893 07:47:16 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "a4aae17d-6f5b-4084-879f-2f6c05f7fd5d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a4aae17d-6f5b-4084-879f-2f6c05f7fd5d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "860e5daa-a892-40a1-901a-8728ef73dd41"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "860e5daa-a892-40a1-901a-8728ef73dd41",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "cb42d6b9-1772-4cf3-8736-70e818c1656e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "cb42d6b9-1772-4cf3-8736-70e818c1656e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "a583ff7a-8529-4459-83ea-d89cff55bd83"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "a583ff7a-8529-4459-83ea-d89cff55bd83",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": 
false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "37846024-d078-4da7-8693-bc42f53f9156"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "37846024-d078-4da7-8693-bc42f53f9156",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "696419c2-71ae-4b30-9852-bc971f4f1df4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "696419c2-71ae-4b30-9852-bc971f4f1df4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:26.893 07:47:16 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:15:26.893 07:47:16 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:15:26.893 07:47:16 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:15:26.893 07:47:16 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:15:26.893 07:47:16 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 71859 00:15:26.893 07:47:16 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 71859 ']' 00:15:26.893 07:47:16 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 71859 00:15:26.893 07:47:16 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:15:26.893 07:47:16 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:26.893 07:47:16 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71859 00:15:26.893 killing process with pid 71859 00:15:26.893 07:47:16 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:26.893 07:47:16 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:26.893 07:47:16 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71859' 00:15:26.893 07:47:16 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 71859 00:15:26.893 
07:47:16 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 71859 00:15:28.814 07:47:18 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:28.814 07:47:18 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:28.814 07:47:18 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:28.814 07:47:18 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:28.814 07:47:18 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:28.814 ************************************ 00:15:28.814 START TEST bdev_hello_world 00:15:28.814 ************************************ 00:15:28.814 07:47:18 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:28.814 [2024-11-29 07:47:18.409556] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:15:28.814 [2024-11-29 07:47:18.409917] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72143 ] 00:15:28.814 [2024-11-29 07:47:18.575763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.814 [2024-11-29 07:47:18.702129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.387 [2024-11-29 07:47:19.112066] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:15:29.387 [2024-11-29 07:47:19.112130] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:15:29.387 [2024-11-29 07:47:19.112149] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:15:29.387 [2024-11-29 07:47:19.114320] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:15:29.387 [2024-11-29 07:47:19.114927] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:15:29.387 [2024-11-29 07:47:19.114959] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:15:29.387 [2024-11-29 07:47:19.115595] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:15:29.387 00:15:29.387 [2024-11-29 07:47:19.115641] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:15:30.334 00:15:30.334 real 0m1.571s 00:15:30.334 user 0m1.181s 00:15:30.334 sys 0m0.239s 00:15:30.334 07:47:19 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:30.334 ************************************ 00:15:30.334 END TEST bdev_hello_world 00:15:30.334 ************************************ 00:15:30.334 07:47:19 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:15:30.334 07:47:19 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:15:30.334 07:47:19 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:30.334 07:47:19 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:30.334 07:47:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:30.334 ************************************ 00:15:30.334 START TEST bdev_bounds 00:15:30.334 ************************************ 00:15:30.334 07:47:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:15:30.334 Process bdevio pid: 72184 00:15:30.334 07:47:19 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=72184 00:15:30.334 07:47:19 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:15:30.334 07:47:19 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 72184' 00:15:30.334 07:47:19 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 72184 00:15:30.334 07:47:19 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:30.334 07:47:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 72184 ']' 00:15:30.334 07:47:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:30.334 07:47:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:30.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:30.334 07:47:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:30.334 07:47:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:30.334 07:47:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:30.334 [2024-11-29 07:47:20.047800] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:15:30.334 [2024-11-29 07:47:20.048157] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72184 ] 00:15:30.334 [2024-11-29 07:47:20.213131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:30.595 [2024-11-29 07:47:20.338837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:30.595 [2024-11-29 07:47:20.339150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:30.595 [2024-11-29 07:47:20.339212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.168 07:47:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:31.168 07:47:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:15:31.168 07:47:20 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:15:31.168 I/O targets: 00:15:31.168 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:31.168 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:31.168 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:31.168 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:15:31.168 nvme2n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:15:31.168 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:15:31.168 00:15:31.168 00:15:31.168 CUnit - A unit testing framework for C - Version 2.1-3 00:15:31.168 http://cunit.sourceforge.net/ 00:15:31.168 00:15:31.168 00:15:31.168 Suite: bdevio tests on: nvme3n1 00:15:31.168 Test: blockdev write read block ...passed 00:15:31.168 Test: blockdev write zeroes read block ...passed 00:15:31.168 Test: blockdev write zeroes read no split ...passed 00:15:31.168 Test: blockdev write zeroes read split ...passed 00:15:31.168 Test: blockdev write zeroes read split partial ...passed 00:15:31.168 Test: blockdev reset ...passed 00:15:31.168 Test: blockdev write read 8 blocks ...passed 00:15:31.168 Test: blockdev write read size > 128k ...passed 00:15:31.168 Test: blockdev write read invalid size ...passed 00:15:31.168 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:31.168 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:31.168 Test: blockdev write read max offset ...passed 00:15:31.168 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:31.168 Test: blockdev writev readv 8 blocks ...passed 00:15:31.168 Test: blockdev writev readv 30 x 1block ...passed 00:15:31.168 Test: blockdev writev readv block ...passed 00:15:31.168 Test: blockdev writev readv size > 128k ...passed 00:15:31.168 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:31.169 Test: blockdev comparev and writev ...passed 00:15:31.169 Test: blockdev nvme passthru rw ...passed 00:15:31.169 Test: blockdev nvme passthru vendor specific ...passed 00:15:31.169 Test: blockdev nvme admin passthru ...passed 00:15:31.169 Test: blockdev copy ...passed 00:15:31.169 Suite: bdevio tests on: nvme2n1 00:15:31.169 Test: blockdev write read block ...passed 00:15:31.169 Test: blockdev write zeroes read block ...passed 00:15:31.169 Test: blockdev write zeroes read no split ...passed 00:15:31.431 Test: blockdev write zeroes read split ...passed 00:15:31.431 Test: blockdev write zeroes read split partial ...passed 00:15:31.431 Test: blockdev reset ...passed 
00:15:31.431 Test: blockdev write read 8 blocks ...passed 00:15:31.431 Test: blockdev write read size > 128k ...passed 00:15:31.431 Test: blockdev write read invalid size ...passed 00:15:31.431 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:31.431 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:31.431 Test: blockdev write read max offset ...passed 00:15:31.431 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:31.431 Test: blockdev writev readv 8 blocks ...passed 00:15:31.431 Test: blockdev writev readv 30 x 1block ...passed 00:15:31.431 Test: blockdev writev readv block ...passed 00:15:31.431 Test: blockdev writev readv size > 128k ...passed 00:15:31.431 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:31.431 Test: blockdev comparev and writev ...passed 00:15:31.431 Test: blockdev nvme passthru rw ...passed 00:15:31.431 Test: blockdev nvme passthru vendor specific ...passed 00:15:31.431 Test: blockdev nvme admin passthru ...passed 00:15:31.431 Test: blockdev copy ...passed 00:15:31.431 Suite: bdevio tests on: nvme1n1 00:15:31.431 Test: blockdev write read block ...passed 00:15:31.431 Test: blockdev write zeroes read block ...passed 00:15:31.431 Test: blockdev write zeroes read no split ...passed 00:15:31.431 Test: blockdev write zeroes read split ...passed 00:15:31.431 Test: blockdev write zeroes read split partial ...passed 00:15:31.431 Test: blockdev reset ...passed 00:15:31.431 Test: blockdev write read 8 blocks ...passed 00:15:31.431 Test: blockdev write read size > 128k ...passed 00:15:31.431 Test: blockdev write read invalid size ...passed 00:15:31.431 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:31.431 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:31.431 Test: blockdev write read max offset ...passed 00:15:31.431 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:31.431 Test: blockdev writev readv 8 blocks ...passed 00:15:31.431 Test: blockdev writev readv 30 x 1block ...passed 00:15:31.431 Test: blockdev writev readv block ...passed 00:15:31.431 Test: blockdev writev readv size > 128k ...passed 00:15:31.431 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:31.431 Test: blockdev comparev and writev ...passed 00:15:31.431 Test: blockdev nvme passthru rw ...passed 00:15:31.431 Test: blockdev nvme passthru vendor specific ...passed 00:15:31.431 Test: blockdev nvme admin passthru ...passed 00:15:31.431 Test: blockdev copy ...passed 00:15:31.431 Suite: bdevio tests on: nvme0n3 00:15:31.431 Test: blockdev write read block ...passed 00:15:31.431 Test: blockdev write zeroes read block ...passed 00:15:31.431 Test: blockdev write zeroes read no split ...passed 00:15:31.431 Test: blockdev write zeroes read split ...passed 00:15:31.431 Test: blockdev write zeroes read split partial ...passed 00:15:31.431 Test: blockdev reset ...passed 00:15:31.431 Test: blockdev write read 8 blocks ...passed 00:15:31.431 Test: blockdev write read size > 128k ...passed 00:15:31.431 Test: blockdev write read invalid size ...passed 00:15:31.431 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:31.431 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:31.431 Test: blockdev write read max offset ...passed 00:15:31.431 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:31.431 Test: blockdev writev readv 8 blocks 
...passed 00:15:31.431 Test: blockdev writev readv 30 x 1block ...passed 00:15:31.431 Test: blockdev writev readv block ...passed 00:15:31.431 Test: blockdev writev readv size > 128k ...passed 00:15:31.431 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:31.431 Test: blockdev comparev and writev ...passed 00:15:31.431 Test: blockdev nvme passthru rw ...passed 00:15:31.431 Test: blockdev nvme passthru vendor specific ...passed 00:15:31.431 Test: blockdev nvme admin passthru ...passed 00:15:31.431 Test: blockdev copy ...passed 00:15:31.431 Suite: bdevio tests on: nvme0n2 00:15:31.431 Test: blockdev write read block ...passed 00:15:31.431 Test: blockdev write zeroes read block ...passed 00:15:31.431 Test: blockdev write zeroes read no split ...passed 00:15:31.693 Test: blockdev write zeroes read split ...passed 00:15:31.693 Test: blockdev write zeroes read split partial ...passed 00:15:31.693 Test: blockdev reset ...passed 00:15:31.693 Test: blockdev write read 8 blocks ...passed 00:15:31.694 Test: blockdev write read size > 128k ...passed 00:15:31.694 Test: blockdev write read invalid size ...passed 00:15:31.694 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:31.694 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:31.694 Test: blockdev write read max offset ...passed 00:15:31.694 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:31.694 Test: blockdev writev readv 8 blocks ...passed 00:15:31.694 Test: blockdev writev readv 30 x 1block ...passed 00:15:31.694 Test: blockdev writev readv block ...passed 00:15:31.694 Test: blockdev writev readv size > 128k ...passed 00:15:31.694 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:31.694 Test: blockdev comparev and writev ...passed 00:15:31.694 Test: blockdev nvme passthru rw ...passed 00:15:31.694 Test: blockdev nvme passthru vendor specific ...passed 00:15:31.694 Test: blockdev nvme admin passthru ...passed 00:15:31.694 Test: blockdev copy ...passed 00:15:31.694 Suite: bdevio tests on: nvme0n1 00:15:31.694 Test: blockdev write read block ...passed 00:15:31.694 Test: blockdev write zeroes read block ...passed 00:15:31.694 Test: blockdev write zeroes read no split ...passed 00:15:31.694 Test: blockdev write zeroes read split ...passed 00:15:31.694 Test: blockdev write zeroes read split partial ...passed 00:15:31.694 Test: blockdev reset ...passed 00:15:31.694 Test: blockdev write read 8 blocks ...passed 00:15:31.694 Test: blockdev write read size > 128k ...passed 00:15:31.694 Test: blockdev write read invalid size ...passed 00:15:31.694 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:31.694 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:31.694 Test: blockdev write read max offset ...passed 00:15:31.694 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:31.694 Test: blockdev writev readv 8 blocks ...passed 00:15:31.694 Test: blockdev writev readv 30 x 1block ...passed 00:15:31.694 Test: blockdev writev readv block ...passed 00:15:31.694 Test: blockdev writev readv size > 128k ...passed 00:15:31.694 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:31.694 Test: blockdev comparev and writev ...passed 00:15:31.694 Test: blockdev nvme passthru rw ...passed 00:15:31.694 Test: blockdev nvme passthru vendor specific ...passed 00:15:31.694 Test: blockdev nvme admin passthru ...passed 00:15:31.694 Test: blockdev copy ...passed 
00:15:31.694 00:15:31.694 Run Summary: Type Total Ran Passed Failed Inactive 00:15:31.694 suites 6 6 n/a 0 0 00:15:31.694 tests 138 138 138 0 0 00:15:31.694 asserts 780 780 780 0 n/a 00:15:31.694 00:15:31.694 Elapsed time = 1.241 seconds 00:15:31.694 0 00:15:31.694 07:47:21 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 72184 00:15:31.694 07:47:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 72184 ']' 00:15:31.694 07:47:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 72184 00:15:31.694 07:47:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:15:31.694 07:47:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:31.694 07:47:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72184 00:15:31.694 07:47:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:31.694 07:47:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:31.694 07:47:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72184' 00:15:31.694 killing process with pid 72184 00:15:31.694 07:47:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 72184 00:15:31.694 07:47:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 72184 00:15:32.636 07:47:22 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:15:32.636 00:15:32.636 real 0m2.358s 00:15:32.636 user 0m5.764s 00:15:32.636 sys 0m0.396s 00:15:32.636 07:47:22 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:32.636 ************************************ 00:15:32.636 07:47:22 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:32.636 END TEST bdev_bounds 00:15:32.636 ************************************ 00:15:32.636 07:47:22 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:32.636 07:47:22 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:32.636 07:47:22 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:32.637 07:47:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:32.637 ************************************ 00:15:32.637 START TEST bdev_nbd 00:15:32.637 ************************************ 00:15:32.637 07:47:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:32.637 07:47:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:15:32.637 07:47:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:15:32.637 07:47:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:32.637 07:47:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:32.637 07:47:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:32.637 07:47:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:15:32.637 07:47:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
00:15:32.637 07:47:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:15:32.637 07:47:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:32.637 07:47:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:15:32.637 07:47:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:15:32.637 07:47:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:32.637 07:47:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:15:32.637 07:47:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:32.637 07:47:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:15:32.637 07:47:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72240 00:15:32.637 07:47:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:15:32.637 07:47:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72240 /var/tmp/spdk-nbd.sock 00:15:32.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:32.637 07:47:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 72240 ']' 00:15:32.637 07:47:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:32.637 07:47:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:32.637 07:47:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:32.637 07:47:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:32.637 07:47:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:32.637 07:47:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:32.637 [2024-11-29 07:47:22.466348] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
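The bdev_svc app started above is a thin SPDK host that loads bdev.json and serves RPCs on /var/tmp/spdk-nbd.sock; the EAL parameter dump on the next line confirms it is pinned to a single core (-c 0x1). The same startup by hand, as a minimal sketch assuming an SPDK build tree as the working directory (the rpc_get_methods call is just a readiness probe standing in for waitforlisten):

  ./test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 \
      --json ./test/bdev/bdev.json &
  # block until the RPC socket answers
  until ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2
  done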
00:15:32.637 [2024-11-29 07:47:22.466485] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:32.898 [2024-11-29 07:47:22.623481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.898 [2024-11-29 07:47:22.709826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.470 07:47:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:33.470 07:47:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:15:33.470 07:47:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:33.470 07:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:33.470 07:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:33.470 07:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:15:33.470 07:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:33.470 07:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:33.470 07:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:33.470 07:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:15:33.470 07:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:15:33.470 07:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:15:33.470 07:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:15:33.470 07:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:33.470 07:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:15:33.731 07:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:15:33.732 07:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:15:33.732 07:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:15:33.732 07:47:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:33.732 07:47:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:33.732 07:47:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:33.732 07:47:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:33.732 07:47:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:33.732 07:47:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:33.732 07:47:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:33.732 07:47:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:33.732 07:47:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:33.732 
1+0 records in 00:15:33.732 1+0 records out 00:15:33.732 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00142752 s, 2.9 MB/s 00:15:33.732 07:47:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.732 07:47:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:33.732 07:47:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.732 07:47:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:33.732 07:47:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:33.732 07:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:33.732 07:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:33.732 07:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:15:34.035 07:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:15:34.036 07:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:15:34.036 07:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:15:34.036 07:47:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:34.036 07:47:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:34.036 07:47:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:34.036 07:47:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:34.036 07:47:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:34.036 07:47:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:34.036 07:47:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:34.036 07:47:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:34.036 07:47:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:34.036 1+0 records in 00:15:34.036 1+0 records out 00:15:34.036 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00109566 s, 3.7 MB/s 00:15:34.036 07:47:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.036 07:47:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:34.036 07:47:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.036 07:47:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:34.036 07:47:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:34.036 07:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:34.036 07:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:34.036 07:47:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:15:34.324 07:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:15:34.324 07:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:15:34.324 07:47:24 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:15:34.324 07:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:15:34.324 07:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:34.324 07:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:34.324 07:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:34.324 07:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:15:34.324 07:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:34.325 07:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:34.325 07:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:34.325 07:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:34.325 1+0 records in 00:15:34.325 1+0 records out 00:15:34.325 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00107096 s, 3.8 MB/s 00:15:34.325 07:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.325 07:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:34.325 07:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.325 07:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:34.325 07:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:34.325 07:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:34.325 07:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:34.325 07:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:15:34.587 07:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:15:34.587 07:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:15:34.587 07:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:15:34.587 07:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:15:34.587 07:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:34.587 07:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:34.587 07:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:34.587 07:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:15:34.587 07:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:34.587 07:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:34.587 07:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:34.587 07:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:34.587 1+0 records in 00:15:34.587 1+0 records out 00:15:34.587 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00137322 s, 3.0 MB/s 00:15:34.587 07:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.587 07:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:34.587 07:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.587 07:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:34.587 07:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:34.587 07:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:34.587 07:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:34.587 07:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:15:34.849 07:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:15:34.849 07:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:15:34.849 07:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:15:34.849 07:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:15:34.849 07:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:34.849 07:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:34.849 07:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:34.849 07:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:15:34.849 07:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:34.849 07:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:34.849 07:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:34.849 07:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:34.849 1+0 records in 00:15:34.850 1+0 records out 00:15:34.850 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00132938 s, 3.1 MB/s 00:15:34.850 07:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.850 07:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:34.850 07:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.850 07:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:34.850 07:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:34.850 07:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:34.850 07:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:34.850 07:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:15:35.111 07:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:15:35.111 07:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:15:35.111 07:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:15:35.111 07:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:15:35.111 07:47:24 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:35.111 07:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:35.111 07:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:35.111 07:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:15:35.111 07:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:35.111 07:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:35.111 07:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:35.111 07:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:35.111 1+0 records in 00:15:35.111 1+0 records out 00:15:35.112 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00132188 s, 3.1 MB/s 00:15:35.112 07:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:35.112 07:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:35.112 07:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:35.112 07:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:35.112 07:47:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:35.112 07:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:35.112 07:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:35.112 07:47:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:35.373 07:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:15:35.373 { 00:15:35.373 "nbd_device": "/dev/nbd0", 00:15:35.373 "bdev_name": "nvme0n1" 00:15:35.373 }, 00:15:35.373 { 00:15:35.373 "nbd_device": "/dev/nbd1", 00:15:35.373 "bdev_name": "nvme0n2" 00:15:35.373 }, 00:15:35.373 { 00:15:35.373 "nbd_device": "/dev/nbd2", 00:15:35.373 "bdev_name": "nvme0n3" 00:15:35.373 }, 00:15:35.373 { 00:15:35.373 "nbd_device": "/dev/nbd3", 00:15:35.373 "bdev_name": "nvme1n1" 00:15:35.373 }, 00:15:35.373 { 00:15:35.373 "nbd_device": "/dev/nbd4", 00:15:35.373 "bdev_name": "nvme2n1" 00:15:35.373 }, 00:15:35.373 { 00:15:35.373 "nbd_device": "/dev/nbd5", 00:15:35.373 "bdev_name": "nvme3n1" 00:15:35.373 } 00:15:35.373 ]' 00:15:35.373 07:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:15:35.373 07:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:15:35.373 07:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:15:35.373 { 00:15:35.373 "nbd_device": "/dev/nbd0", 00:15:35.373 "bdev_name": "nvme0n1" 00:15:35.373 }, 00:15:35.373 { 00:15:35.373 "nbd_device": "/dev/nbd1", 00:15:35.373 "bdev_name": "nvme0n2" 00:15:35.373 }, 00:15:35.373 { 00:15:35.373 "nbd_device": "/dev/nbd2", 00:15:35.373 "bdev_name": "nvme0n3" 00:15:35.373 }, 00:15:35.373 { 00:15:35.373 "nbd_device": "/dev/nbd3", 00:15:35.373 "bdev_name": "nvme1n1" 00:15:35.373 }, 00:15:35.373 { 00:15:35.373 "nbd_device": "/dev/nbd4", 00:15:35.373 "bdev_name": "nvme2n1" 00:15:35.373 }, 00:15:35.373 { 00:15:35.373 "nbd_device": 
"/dev/nbd5", 00:15:35.373 "bdev_name": "nvme3n1" 00:15:35.373 } 00:15:35.373 ]' 00:15:35.373 07:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:15:35.373 07:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:35.373 07:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:15:35.373 07:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:35.373 07:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:35.373 07:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:35.373 07:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:35.635 07:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:35.635 07:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:35.635 07:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:35.635 07:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:35.635 07:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:35.635 07:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:35.635 07:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:35.635 07:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:35.635 07:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:35.635 07:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:35.635 07:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:35.635 07:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:35.635 07:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:35.635 07:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:35.635 07:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:35.635 07:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:35.635 07:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:35.635 07:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:35.635 07:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:35.635 07:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:15:35.897 07:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:15:35.897 07:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:15:35.897 07:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:15:35.897 07:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:35.897 07:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:35.897 07:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:15:35.897 07:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:35.897 07:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:35.897 07:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:35.897 07:47:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:15:36.159 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:15:36.159 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:15:36.159 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:15:36.159 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:36.159 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:36.159 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:15:36.159 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:36.159 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:36.159 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:36.159 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:15:36.421 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:15:36.421 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:15:36.421 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:15:36.421 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:36.421 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:36.421 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:15:36.421 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:36.421 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:36.421 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:36.421 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:15:36.682 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:15:36.682 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:15:36.683 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:15:36.683 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:36.683 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:36.683 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:15:36.683 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:36.683 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:36.683 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:36.683 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:36.683 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:36.944 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:36.944 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:36.944 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:36.944 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:36.944 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:36.944 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:36.944 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:36.944 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:36.944 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:36.944 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:15:36.944 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:15:36.944 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:15:36.944 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:36.944 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:36.944 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:36.944 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:36.944 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:36.944 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:36.944 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:36.944 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:36.944 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:36.944 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:36.944 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:36.944 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:36.944 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:15:36.944 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:36.944 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:36.944 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:15:37.206 /dev/nbd0 00:15:37.206 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:37.206 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:37.206 07:47:26 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:37.206 07:47:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:37.206 07:47:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:37.206 07:47:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:37.206 07:47:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:37.206 07:47:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:37.206 07:47:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:37.206 07:47:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:37.206 07:47:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:37.206 1+0 records in 00:15:37.206 1+0 records out 00:15:37.206 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000845978 s, 4.8 MB/s 00:15:37.206 07:47:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.206 07:47:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:37.206 07:47:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.206 07:47:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:37.206 07:47:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:37.206 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:37.206 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:37.206 07:47:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:15:37.468 /dev/nbd1 00:15:37.468 07:47:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:37.468 07:47:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:37.468 07:47:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:37.468 07:47:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:37.468 07:47:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:37.468 07:47:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:37.468 07:47:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:37.468 07:47:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:37.468 07:47:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:37.468 07:47:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:37.468 07:47:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:37.468 1+0 records in 00:15:37.468 1+0 records out 00:15:37.468 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00112414 s, 3.6 MB/s 00:15:37.468 07:47:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.468 07:47:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:37.468 07:47:27 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.468 07:47:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:37.468 07:47:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:37.468 07:47:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:37.468 07:47:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:37.468 07:47:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:15:37.729 /dev/nbd10 00:15:37.729 07:47:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:15:37.729 07:47:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:15:37.729 07:47:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:15:37.729 07:47:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:37.729 07:47:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:37.729 07:47:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:37.729 07:47:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:15:37.729 07:47:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:37.729 07:47:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:37.729 07:47:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:37.729 07:47:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:37.729 1+0 records in 00:15:37.729 1+0 records out 00:15:37.729 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105853 s, 3.9 MB/s 00:15:37.729 07:47:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.729 07:47:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:37.729 07:47:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.729 07:47:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:37.729 07:47:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:37.729 07:47:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:37.729 07:47:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:37.729 07:47:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:15:37.729 /dev/nbd11 00:15:37.990 07:47:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:15:37.990 07:47:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:15:37.990 07:47:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:15:37.990 07:47:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:37.990 07:47:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:37.990 07:47:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:37.990 07:47:27 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:15:37.990 07:47:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:37.990 07:47:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:37.990 07:47:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:37.990 07:47:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:37.990 1+0 records in 00:15:37.990 1+0 records out 00:15:37.990 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00131019 s, 3.1 MB/s 00:15:37.990 07:47:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.990 07:47:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:37.990 07:47:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.990 07:47:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:37.990 07:47:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:37.990 07:47:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:37.990 07:47:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:37.990 07:47:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:15:37.990 /dev/nbd12 00:15:37.990 07:47:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:15:38.252 07:47:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:15:38.252 07:47:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:15:38.252 07:47:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:38.252 07:47:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:38.252 07:47:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:38.252 07:47:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:15:38.252 07:47:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:38.252 07:47:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:38.252 07:47:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:38.253 07:47:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:38.253 1+0 records in 00:15:38.253 1+0 records out 00:15:38.253 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000933636 s, 4.4 MB/s 00:15:38.253 07:47:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:38.253 07:47:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:38.253 07:47:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:38.253 07:47:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:38.253 07:47:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:38.253 07:47:27 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:38.253 07:47:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:38.253 07:47:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:15:38.253 /dev/nbd13 00:15:38.253 07:47:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:15:38.253 07:47:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:15:38.253 07:47:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:15:38.253 07:47:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:38.253 07:47:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:38.253 07:47:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:38.253 07:47:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:15:38.253 07:47:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:38.253 07:47:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:38.253 07:47:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:38.253 07:47:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:38.514 1+0 records in 00:15:38.514 1+0 records out 00:15:38.514 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00136413 s, 3.0 MB/s 00:15:38.514 07:47:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:38.514 07:47:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:38.514 07:47:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:38.514 07:47:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:38.514 07:47:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:38.514 07:47:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:38.514 07:47:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:38.514 07:47:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:38.514 07:47:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:38.514 07:47:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:38.514 07:47:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:38.514 { 00:15:38.514 "nbd_device": "/dev/nbd0", 00:15:38.514 "bdev_name": "nvme0n1" 00:15:38.514 }, 00:15:38.514 { 00:15:38.514 "nbd_device": "/dev/nbd1", 00:15:38.514 "bdev_name": "nvme0n2" 00:15:38.514 }, 00:15:38.514 { 00:15:38.514 "nbd_device": "/dev/nbd10", 00:15:38.514 "bdev_name": "nvme0n3" 00:15:38.514 }, 00:15:38.514 { 00:15:38.514 "nbd_device": "/dev/nbd11", 00:15:38.514 "bdev_name": "nvme1n1" 00:15:38.514 }, 00:15:38.514 { 00:15:38.514 "nbd_device": "/dev/nbd12", 00:15:38.514 "bdev_name": "nvme2n1" 00:15:38.514 }, 00:15:38.514 { 00:15:38.514 "nbd_device": "/dev/nbd13", 00:15:38.514 "bdev_name": "nvme3n1" 00:15:38.514 } 00:15:38.514 ]' 00:15:38.514 07:47:28 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:38.514 07:47:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:38.514 { 00:15:38.514 "nbd_device": "/dev/nbd0", 00:15:38.514 "bdev_name": "nvme0n1" 00:15:38.514 }, 00:15:38.514 { 00:15:38.514 "nbd_device": "/dev/nbd1", 00:15:38.514 "bdev_name": "nvme0n2" 00:15:38.514 }, 00:15:38.514 { 00:15:38.514 "nbd_device": "/dev/nbd10", 00:15:38.514 "bdev_name": "nvme0n3" 00:15:38.514 }, 00:15:38.514 { 00:15:38.514 "nbd_device": "/dev/nbd11", 00:15:38.514 "bdev_name": "nvme1n1" 00:15:38.514 }, 00:15:38.514 { 00:15:38.514 "nbd_device": "/dev/nbd12", 00:15:38.514 "bdev_name": "nvme2n1" 00:15:38.514 }, 00:15:38.514 { 00:15:38.514 "nbd_device": "/dev/nbd13", 00:15:38.514 "bdev_name": "nvme3n1" 00:15:38.514 } 00:15:38.514 ]' 00:15:38.514 07:47:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:38.514 /dev/nbd1 00:15:38.514 /dev/nbd10 00:15:38.514 /dev/nbd11 00:15:38.514 /dev/nbd12 00:15:38.514 /dev/nbd13' 00:15:38.514 07:47:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:38.514 /dev/nbd1 00:15:38.514 /dev/nbd10 00:15:38.514 /dev/nbd11 00:15:38.514 /dev/nbd12 00:15:38.514 /dev/nbd13' 00:15:38.514 07:47:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:38.514 07:47:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:15:38.514 07:47:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:15:38.514 07:47:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:15:38.514 07:47:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:15:38.514 07:47:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:15:38.514 07:47:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:38.775 07:47:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:38.775 07:47:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:38.775 07:47:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:38.775 07:47:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:38.775 07:47:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:15:38.775 256+0 records in 00:15:38.775 256+0 records out 00:15:38.775 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00782599 s, 134 MB/s 00:15:38.775 07:47:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:38.775 07:47:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:38.775 256+0 records in 00:15:38.775 256+0 records out 00:15:38.775 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.239749 s, 4.4 MB/s 00:15:38.775 07:47:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:38.775 07:47:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:39.038 256+0 records in 00:15:39.038 256+0 records out 00:15:39.038 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.240738 s, 
4.4 MB/s 00:15:39.038 07:47:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:39.038 07:47:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:15:39.300 256+0 records in 00:15:39.300 256+0 records out 00:15:39.300 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.238556 s, 4.4 MB/s 00:15:39.300 07:47:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:39.300 07:47:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:15:39.560 256+0 records in 00:15:39.560 256+0 records out 00:15:39.560 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.294896 s, 3.6 MB/s 00:15:39.560 07:47:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:39.560 07:47:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:15:39.820 256+0 records in 00:15:39.820 256+0 records out 00:15:39.820 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0787852 s, 13.3 MB/s 00:15:39.820 07:47:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:39.820 07:47:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:15:39.820 256+0 records in 00:15:39.820 256+0 records out 00:15:39.820 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.125901 s, 8.3 MB/s 00:15:39.820 07:47:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:15:39.820 07:47:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:39.820 07:47:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:39.820 07:47:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:39.820 07:47:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:39.820 07:47:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:39.820 07:47:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:39.820 07:47:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:39.820 07:47:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:15:39.820 07:47:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:39.821 07:47:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:15:39.821 07:47:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:39.821 07:47:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:15:39.821 07:47:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:39.821 07:47:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 
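The write pass above pushes the same 1 MiB of /dev/urandom data through each NBD device with O_DIRECT, and the cmp -b -n 1M calls around this point verify it byte-for-byte against the source file — any corruption in the bdev/NBD path would surface as a cmp mismatch and fail the test. The round trip for a single device, condensed into a standalone sketch (the /tmp path is illustrative):

  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256           # 1 MiB of random data
  dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct # write it through NBD
  cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0 && echo "nbd0: data intact"
  rm /tmp/nbdrandtest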
00:15:39.821 07:47:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:39.821 07:47:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:15:39.821 07:47:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:39.821 07:47:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:15:39.821 07:47:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:40.082 07:47:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:40.082 07:47:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:40.082 07:47:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:40.082 07:47:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:40.082 07:47:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:40.082 07:47:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:40.082 07:47:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:40.082 07:47:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:40.082 07:47:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:40.082 07:47:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:40.082 07:47:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:40.082 07:47:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:40.082 07:47:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:40.082 07:47:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:40.082 07:47:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:40.082 07:47:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:40.082 07:47:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:40.341 07:47:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:40.341 07:47:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:40.341 07:47:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:40.341 07:47:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:40.341 07:47:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:40.341 07:47:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:40.342 07:47:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:40.342 07:47:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:40.342 07:47:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:40.342 07:47:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_stop_disk /dev/nbd10 00:15:40.600 07:47:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:15:40.600 07:47:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:15:40.600 07:47:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:15:40.600 07:47:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:40.600 07:47:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:40.600 07:47:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:15:40.600 07:47:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:40.600 07:47:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:40.600 07:47:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:40.600 07:47:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:15:40.859 07:47:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:15:40.859 07:47:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:15:40.859 07:47:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:15:40.859 07:47:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:40.859 07:47:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:40.859 07:47:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:15:40.859 07:47:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:40.859 07:47:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:40.859 07:47:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:40.859 07:47:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:15:41.117 07:47:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:15:41.117 07:47:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:15:41.117 07:47:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:15:41.117 07:47:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:41.117 07:47:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:41.117 07:47:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:15:41.117 07:47:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:41.117 07:47:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:41.117 07:47:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:41.117 07:47:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:15:41.117 07:47:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:15:41.117 07:47:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:15:41.117 07:47:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:15:41.117 07:47:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:41.117 07:47:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
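Teardown is asynchronous: nbd_stop_disk returns once the RPC is accepted, so waitfornbd_exit polls /proc/partitions (capped at 20 attempts, per the (( i <= 20 )) guard in these traces) until the device entry disappears. A simplified equivalent of that polling loop — the sleep interval is an assumption, the harness may pace retries differently:

  for i in $(seq 1 20); do
      grep -q -w nbd13 /proc/partitions || break   # entry gone -> stop polling
      sleep 0.1
  done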
00:15:41.117 07:47:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:15:41.117 07:47:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:41.118 07:47:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:41.118 07:47:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:41.118 07:47:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:41.118 07:47:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:41.376 07:47:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:41.376 07:47:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:41.376 07:47:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:41.376 07:47:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:41.376 07:47:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:41.376 07:47:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:41.376 07:47:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:41.376 07:47:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:41.376 07:47:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:41.376 07:47:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:15:41.376 07:47:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:41.376 07:47:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:15:41.376 07:47:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:41.376 07:47:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:41.376 07:47:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:15:41.376 07:47:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:15:41.636 malloc_lvol_verify 00:15:41.636 07:47:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:15:41.895 bd458f93-4141-4fa5-9ebe-e6de90ea3ac2 00:15:41.895 07:47:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:15:42.154 f18942db-1c33-427d-b23e-f7c9401f5140 00:15:42.154 07:47:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:15:42.154 /dev/nbd0 00:15:42.415 07:47:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:15:42.415 07:47:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:15:42.415 07:47:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:15:42.415 07:47:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:15:42.415 07:47:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:15:42.415 mke2fs 1.47.0 (5-Feb-2023) 00:15:42.415 Discarding device 
blocks: 0/4096 done 00:15:42.415 Creating filesystem with 4096 1k blocks and 1024 inodes 00:15:42.415 00:15:42.415 Allocating group tables: 0/1 done 00:15:42.415 Writing inode tables: 0/1 done 00:15:42.415 Creating journal (1024 blocks): done 00:15:42.415 Writing superblocks and filesystem accounting information: 0/1 done 00:15:42.415 00:15:42.415 07:47:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:42.415 07:47:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:42.415 07:47:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:42.415 07:47:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:42.415 07:47:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:42.415 07:47:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:42.415 07:47:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:42.415 07:47:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:42.415 07:47:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:42.415 07:47:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:42.415 07:47:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:42.415 07:47:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:42.415 07:47:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:42.415 07:47:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:42.415 07:47:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:42.415 07:47:32 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72240 00:15:42.415 07:47:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 72240 ']' 00:15:42.415 07:47:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 72240 00:15:42.415 07:47:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:15:42.415 07:47:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:42.415 07:47:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72240 00:15:42.677 killing process with pid 72240 00:15:42.677 07:47:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:42.677 07:47:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:42.677 07:47:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72240' 00:15:42.677 07:47:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 72240 00:15:42.677 07:47:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 72240 00:15:43.250 07:47:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:15:43.250 00:15:43.250 real 0m10.779s 00:15:43.250 user 0m14.543s 00:15:43.250 sys 0m3.671s 00:15:43.250 07:47:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:43.250 07:47:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:43.250 ************************************ 00:15:43.250 END TEST bdev_nbd 00:15:43.250 
************************************ 00:15:43.512 07:47:33 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:15:43.512 07:47:33 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:15:43.512 07:47:33 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:15:43.512 07:47:33 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:15:43.512 07:47:33 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:43.512 07:47:33 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:43.512 07:47:33 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:43.512 ************************************ 00:15:43.512 START TEST bdev_fio 00:15:43.512 ************************************ 00:15:43.512 07:47:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:15:43.512 07:47:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:15:43.512 07:47:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:15:43.512 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:15:43.512 07:47:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:15:43.512 07:47:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:15:43.512 07:47:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:15:43.512 07:47:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:15:43.512 07:47:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:15:43.513 07:47:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:43.513 07:47:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:15:43.513 07:47:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:15:43.513 07:47:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:15:43.513 07:47:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:15:43.513 07:47:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:43.513 07:47:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:15:43.513 07:47:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:15:43.513 07:47:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:43.513 07:47:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:15:43.513 07:47:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:15:43.513 07:47:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:15:43.513 07:47:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:15:43.513 07:47:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:15:43.513 07:47:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:15:43.513 07:47:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:15:43.513 07:47:33 
blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:43.513 07:47:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:15:43.513 07:47:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:15:43.513 07:47:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:43.513 07:47:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:15:43.513 07:47:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:15:43.513 07:47:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:43.513 07:47:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:15:43.513 07:47:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:15:43.513 07:47:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:43.513 07:47:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:15:43.513 07:47:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:15:43.513 07:47:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:43.513 07:47:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:15:43.513 07:47:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:15:43.513 07:47:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:43.513 07:47:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:15:43.513 07:47:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:15:43.513 07:47:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:15:43.513 07:47:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:43.513 07:47:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:15:43.513 07:47:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:43.513 07:47:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:43.513 ************************************ 00:15:43.513 START TEST bdev_fio_rw_verify 00:15:43.513 ************************************ 00:15:43.513 07:47:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:43.513 07:47:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:43.513 07:47:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:43.513 07:47:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:43.513 07:47:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:43.513 07:47:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:43.513 07:47:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:15:43.513 07:47:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:43.513 07:47:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:43.513 07:47:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:43.513 07:47:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:15:43.513 07:47:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:43.513 07:47:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:43.513 07:47:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:43.513 07:47:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:15:43.513 07:47:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:43.513 07:47:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:43.775 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:43.775 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:43.775 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:43.775 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:43.775 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:43.775 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:43.775 fio-3.35 00:15:43.775 Starting 6 threads 00:15:56.014 00:15:56.014 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=72649: Fri Nov 29 07:47:44 2024 00:15:56.014 read: IOPS=16.3k, BW=63.7MiB/s (66.8MB/s)(637MiB/10003msec) 00:15:56.014 slat (usec): min=2, max=1412, avg= 6.58, stdev=13.06 00:15:56.014 clat (usec): min=78, max=11846, avg=1225.04, 
stdev=762.28 00:15:56.014 lat (usec): min=82, max=11877, avg=1231.62, stdev=763.05 00:15:56.014 clat percentiles (usec): 00:15:56.014 | 50.000th=[ 1123], 99.000th=[ 3523], 99.900th=[ 5014], 99.990th=[ 5866], 00:15:56.014 | 99.999th=[11863] 00:15:56.014 write: IOPS=16.5k, BW=64.4MiB/s (67.5MB/s)(644MiB/10003msec); 0 zone resets 00:15:56.014 slat (usec): min=9, max=3824, avg=37.53, stdev=125.80 00:15:56.014 clat (usec): min=55, max=7637, avg=1389.37, stdev=812.81 00:15:56.014 lat (usec): min=83, max=7661, avg=1426.90, stdev=826.73 00:15:56.014 clat percentiles (usec): 00:15:56.014 | 50.000th=[ 1270], 99.000th=[ 3818], 99.900th=[ 5145], 99.990th=[ 7111], 00:15:56.014 | 99.999th=[ 7635] 00:15:56.014 bw ( KiB/s): min=49037, max=139129, per=100.00%, avg=66552.42, stdev=3891.38, samples=114 00:15:56.014 iops : min=12257, max=34782, avg=16637.16, stdev=972.89, samples=114 00:15:56.014 lat (usec) : 100=0.02%, 250=3.44%, 500=11.11%, 750=13.52%, 1000=11.87% 00:15:56.014 lat (msec) : 2=42.91%, 4=16.55%, 10=0.59%, 20=0.01% 00:15:56.014 cpu : usr=43.90%, sys=31.57%, ctx=5578, majf=0, minf=15880 00:15:56.014 IO depths : 1=11.4%, 2=23.8%, 4=51.1%, 8=13.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:56.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:56.014 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:56.014 issued rwts: total=163183,164796,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:56.014 latency : target=0, window=0, percentile=100.00%, depth=8 00:15:56.014 00:15:56.014 Run status group 0 (all jobs): 00:15:56.014 READ: bw=63.7MiB/s (66.8MB/s), 63.7MiB/s-63.7MiB/s (66.8MB/s-66.8MB/s), io=637MiB (668MB), run=10003-10003msec 00:15:56.014 WRITE: bw=64.4MiB/s (67.5MB/s), 64.4MiB/s-64.4MiB/s (67.5MB/s-67.5MB/s), io=644MiB (675MB), run=10003-10003msec 00:15:56.014 ----------------------------------------------------- 00:15:56.014 Suppressions used: 00:15:56.014 count bytes template 00:15:56.014 6 48 /usr/src/fio/parse.c 00:15:56.014 1506 144576 /usr/src/fio/iolog.c 00:15:56.014 1 8 libtcmalloc_minimal.so 00:15:56.014 1 904 libcrypto.so 00:15:56.014 ----------------------------------------------------- 00:15:56.014 00:15:56.014 00:15:56.014 real 0m11.983s 00:15:56.014 user 0m27.874s 00:15:56.014 sys 0m19.245s 00:15:56.014 ************************************ 00:15:56.014 END TEST bdev_fio_rw_verify 00:15:56.014 07:47:45 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:56.014 07:47:45 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:15:56.014 ************************************ 00:15:56.014 07:47:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:15:56.014 07:47:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:56.014 07:47:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:15:56.014 07:47:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:56.014 07:47:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:15:56.014 07:47:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:15:56.014 07:47:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:15:56.014 07:47:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- 
# local fio_dir=/usr/src/fio 00:15:56.014 07:47:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:56.014 07:47:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:15:56.014 07:47:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:15:56.014 07:47:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:56.014 07:47:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:15:56.014 07:47:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:15:56.014 07:47:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:15:56.014 07:47:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:15:56.014 07:47:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:15:56.015 07:47:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "a4aae17d-6f5b-4084-879f-2f6c05f7fd5d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a4aae17d-6f5b-4084-879f-2f6c05f7fd5d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "860e5daa-a892-40a1-901a-8728ef73dd41"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "860e5daa-a892-40a1-901a-8728ef73dd41",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "cb42d6b9-1772-4cf3-8736-70e818c1656e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "cb42d6b9-1772-4cf3-8736-70e818c1656e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' 
"write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "a583ff7a-8529-4459-83ea-d89cff55bd83"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "a583ff7a-8529-4459-83ea-d89cff55bd83",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "37846024-d078-4da7-8693-bc42f53f9156"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "37846024-d078-4da7-8693-bc42f53f9156",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "696419c2-71ae-4b30-9852-bc971f4f1df4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "696419c2-71ae-4b30-9852-bc971f4f1df4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:56.015 07:47:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:15:56.015 07:47:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:56.015 /home/vagrant/spdk_repo/spdk 00:15:56.015 07:47:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:15:56.015 07:47:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:15:56.015 07:47:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # 
return 0 00:15:56.015 00:15:56.015 real 0m12.161s 00:15:56.015 user 0m27.950s 00:15:56.015 sys 0m19.318s 00:15:56.015 07:47:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:56.015 07:47:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:56.015 ************************************ 00:15:56.015 END TEST bdev_fio 00:15:56.015 ************************************ 00:15:56.015 07:47:45 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:56.015 07:47:45 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:56.015 07:47:45 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:15:56.015 07:47:45 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:56.015 07:47:45 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:56.015 ************************************ 00:15:56.015 START TEST bdev_verify 00:15:56.015 ************************************ 00:15:56.015 07:47:45 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:56.015 [2024-11-29 07:47:45.542689] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:15:56.015 [2024-11-29 07:47:45.542822] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72823 ] 00:15:56.015 [2024-11-29 07:47:45.708229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:56.015 [2024-11-29 07:47:45.827592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:56.015 [2024-11-29 07:47:45.827608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.598 Running I/O for 5 seconds... 
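For readability, the bdevperf invocation buried in the EAL parameter line above, restated as a standalone sketch (this is the run already in progress, not an additional one). Flag annotations are inferred from the surrounding output and should be read as assumptions:

# -q 128    per-job queue depth (matches "depth: 128" in the result table below)
# -o 4096   I/O size in bytes (matches "IO size: 4096")
# -w verify write-then-read-back-and-compare workload
# -t 5      run time in seconds ("Running I/O for 5 seconds...")
# -m 0x3    core mask covering the two reactors started on cores 0 and 1
# -C        passed by the test harness; its meaning is not evident from this log
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3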
00:15:58.932 25248.00 IOPS, 98.62 MiB/s [2024-11-29T07:47:49.822Z] 24912.00 IOPS, 97.31 MiB/s [2024-11-29T07:47:50.766Z] 24853.33 IOPS, 97.08 MiB/s [2024-11-29T07:47:51.710Z] 24600.00 IOPS, 96.09 MiB/s [2024-11-29T07:47:51.710Z] 24300.80 IOPS, 94.92 MiB/s 00:16:01.766 Latency(us) 00:16:01.766 [2024-11-29T07:47:51.710Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:01.766 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:01.766 Verification LBA range: start 0x0 length 0x80000 00:16:01.766 nvme0n1 : 5.03 1857.71 7.26 0.00 0.00 68770.81 11695.66 73400.32 00:16:01.766 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:01.766 Verification LBA range: start 0x80000 length 0x80000 00:16:01.766 nvme0n1 : 5.05 1877.35 7.33 0.00 0.00 68065.92 14115.45 65334.35 00:16:01.766 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:01.766 Verification LBA range: start 0x0 length 0x80000 00:16:01.766 nvme0n2 : 5.06 1847.72 7.22 0.00 0.00 68972.13 15022.87 67754.14 00:16:01.766 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:01.766 Verification LBA range: start 0x80000 length 0x80000 00:16:01.766 nvme0n2 : 5.05 1876.28 7.33 0.00 0.00 68007.16 16535.24 57671.68 00:16:01.766 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:01.766 Verification LBA range: start 0x0 length 0x80000 00:16:01.766 nvme0n3 : 5.05 1851.71 7.23 0.00 0.00 68685.38 12754.31 68157.44 00:16:01.766 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:01.766 Verification LBA range: start 0x80000 length 0x80000 00:16:01.766 nvme0n3 : 5.05 1875.73 7.33 0.00 0.00 67925.13 11191.53 64931.05 00:16:01.766 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:01.766 Verification LBA range: start 0x0 length 0xbd0bd 00:16:01.766 nvme1n1 : 5.08 2565.49 10.02 0.00 0.00 49267.00 5671.38 57671.68 00:16:01.766 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:01.766 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:16:01.766 nvme1n1 : 5.06 2669.22 10.43 0.00 0.00 47568.74 4990.82 53235.40 00:16:01.766 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:01.766 Verification LBA range: start 0x0 length 0xa0000 00:16:01.766 nvme2n1 : 5.08 1888.62 7.38 0.00 0.00 67032.43 4738.76 60898.07 00:16:01.766 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:01.766 Verification LBA range: start 0xa0000 length 0xa0000 00:16:01.766 nvme2n1 : 5.07 1944.86 7.60 0.00 0.00 65310.50 6503.19 66544.25 00:16:01.766 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:01.766 Verification LBA range: start 0x0 length 0x20000 00:16:01.766 nvme3n1 : 5.08 1862.84 7.28 0.00 0.00 67900.82 5444.53 61301.37 00:16:01.766 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:01.766 Verification LBA range: start 0x20000 length 0x20000 00:16:01.766 nvme3n1 : 5.06 1895.45 7.40 0.00 0.00 66888.61 5343.70 72593.72 00:16:01.766 [2024-11-29T07:47:51.710Z] =================================================================================================================== 00:16:01.766 [2024-11-29T07:47:51.710Z] Total : 24012.99 93.80 0.00 0.00 63517.73 4738.76 73400.32 00:16:02.340 00:16:02.340 real 0m6.756s 00:16:02.340 user 0m10.756s 00:16:02.340 sys 0m1.628s 00:16:02.340 07:47:52 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:16:02.340 ************************************ 00:16:02.340 07:47:52 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:16:02.340 END TEST bdev_verify 00:16:02.340 ************************************ 00:16:02.340 07:47:52 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:02.340 07:47:52 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:16:02.340 07:47:52 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:02.340 07:47:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:02.601 ************************************ 00:16:02.601 START TEST bdev_verify_big_io 00:16:02.601 ************************************ 00:16:02.601 07:47:52 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:02.601 [2024-11-29 07:47:52.365924] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:16:02.601 [2024-11-29 07:47:52.366064] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72920 ] 00:16:02.601 [2024-11-29 07:47:52.531978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:02.863 [2024-11-29 07:47:52.654346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:02.863 [2024-11-29 07:47:52.654507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.436 Running I/O for 5 seconds... 
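The big-I/O verify variant whose results follow drives the same binary against the same bdev.json; judging from the trace above, only the I/O size changes, from 4 KiB to 64 KiB (sketch, flags copied from the log):

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 65536 -w verify -t 5 -C -m 0x3   # -o 65536 = 64 KiB I/Os; other flags as in the 4 KiB run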
00:16:09.292 1562.00 IOPS, 97.62 MiB/s [2024-11-29T07:47:59.822Z] 3333.00 IOPS, 208.31 MiB/s 00:16:09.878 Latency(us) 00:16:09.878 [2024-11-29T07:47:59.822Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:09.878 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:09.878 Verification LBA range: start 0x0 length 0x8000 00:16:09.878 nvme0n1 : 5.93 99.79 6.24 0.00 0.00 1233084.00 6301.54 1690627.15 00:16:09.878 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:09.878 Verification LBA range: start 0x8000 length 0x8000 00:16:09.878 nvme0n1 : 5.66 110.17 6.89 0.00 0.00 1116969.02 66140.95 2322999.14 00:16:09.878 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:09.878 Verification LBA range: start 0x0 length 0x8000 00:16:09.878 nvme0n2 : 5.94 78.17 4.89 0.00 0.00 1491235.61 77836.60 1348630.06 00:16:09.878 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:09.878 Verification LBA range: start 0x8000 length 0x8000 00:16:09.878 nvme0n2 : 5.81 155.70 9.73 0.00 0.00 779767.73 10687.41 851766.35 00:16:09.878 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:09.878 Verification LBA range: start 0x0 length 0x8000 00:16:09.879 nvme0n3 : 5.97 117.90 7.37 0.00 0.00 944916.84 31053.98 1084066.26 00:16:09.879 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:09.879 Verification LBA range: start 0x8000 length 0x8000 00:16:09.879 nvme0n3 : 5.76 134.63 8.41 0.00 0.00 870027.56 136314.88 751748.33 00:16:09.879 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:09.879 Verification LBA range: start 0x0 length 0xbd0b 00:16:09.879 nvme1n1 : 6.06 102.90 6.43 0.00 0.00 1035315.14 28230.89 2077793.67 00:16:09.879 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:09.879 Verification LBA range: start 0xbd0b length 0xbd0b 00:16:09.879 nvme1n1 : 5.81 194.56 12.16 0.00 0.00 583005.42 10485.76 603334.50 00:16:09.879 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:09.879 Verification LBA range: start 0x0 length 0xa000 00:16:09.879 nvme2n1 : 6.13 122.68 7.67 0.00 0.00 835149.08 2180.33 2761787.86 00:16:09.879 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:09.879 Verification LBA range: start 0xa000 length 0xa000 00:16:09.879 nvme2n1 : 5.82 154.04 9.63 0.00 0.00 729802.55 46782.62 909841.33 00:16:09.879 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:09.879 Verification LBA range: start 0x0 length 0x2000 00:16:09.879 nvme3n1 : 6.32 212.58 13.29 0.00 0.00 463497.29 693.17 2348810.24 00:16:09.879 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:09.879 Verification LBA range: start 0x2000 length 0x2000 00:16:09.879 nvme3n1 : 5.82 151.18 9.45 0.00 0.00 725080.09 5797.42 2193943.63 00:16:09.879 [2024-11-29T07:47:59.823Z] =================================================================================================================== 00:16:09.879 [2024-11-29T07:47:59.823Z] Total : 1634.29 102.14 0.00 0.00 826827.59 693.17 2761787.86 00:16:10.874 00:16:10.874 real 0m8.173s 00:16:10.874 user 0m14.930s 00:16:10.874 sys 0m0.513s 00:16:10.874 07:48:00 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:10.874 ************************************ 00:16:10.874 07:48:00 
blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.874 END TEST bdev_verify_big_io 00:16:10.874 ************************************ 00:16:10.874 07:48:00 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:10.874 07:48:00 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:10.874 07:48:00 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:10.874 07:48:00 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:10.874 ************************************ 00:16:10.874 START TEST bdev_write_zeroes 00:16:10.874 ************************************ 00:16:10.874 07:48:00 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:10.874 [2024-11-29 07:48:00.613033] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:16:10.874 [2024-11-29 07:48:00.613182] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73031 ] 00:16:10.874 [2024-11-29 07:48:00.775971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:11.135 [2024-11-29 07:48:00.899687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:11.395 Running I/O for 1 seconds... 00:16:12.782 84800.00 IOPS, 331.25 MiB/s 00:16:12.782 Latency(us) 00:16:12.782 [2024-11-29T07:48:02.726Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:12.782 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:12.782 nvme0n1 : 1.02 13899.01 54.29 0.00 0.00 9198.76 6175.51 23592.96 00:16:12.782 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:12.782 nvme0n2 : 1.02 13882.84 54.23 0.00 0.00 9202.12 6150.30 22383.06 00:16:12.782 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:12.782 nvme0n3 : 1.02 13866.58 54.17 0.00 0.00 9204.45 6074.68 21677.29 00:16:12.782 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:12.782 nvme1n1 : 1.03 14820.94 57.89 0.00 0.00 8569.29 5192.47 18148.43 00:16:12.782 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:12.782 nvme2n1 : 1.03 13838.15 54.06 0.00 0.00 9159.22 3957.37 20467.40 00:16:12.782 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:12.782 nvme3n1 : 1.03 13822.17 53.99 0.00 0.00 9162.12 3932.16 22080.59 00:16:12.782 [2024-11-29T07:48:02.726Z] =================================================================================================================== 00:16:12.782 [2024-11-29T07:48:02.726Z] Total : 84129.68 328.63 0.00 0.00 9076.75 3932.16 23592.96 00:16:13.354 00:16:13.354 real 0m2.647s 00:16:13.354 user 0m1.948s 00:16:13.354 sys 0m0.502s 00:16:13.354 07:48:03 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:13.354 ************************************ 00:16:13.354 END TEST bdev_write_zeroes 00:16:13.354 ************************************ 00:16:13.354 
07:48:03 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:16:13.354 07:48:03 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:13.354 07:48:03 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:13.354 07:48:03 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:13.354 07:48:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:13.354 ************************************ 00:16:13.354 START TEST bdev_json_nonenclosed 00:16:13.354 ************************************ 00:16:13.354 07:48:03 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:13.616 [2024-11-29 07:48:03.330479] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:16:13.617 [2024-11-29 07:48:03.330626] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73081 ] 00:16:13.617 [2024-11-29 07:48:03.487949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:13.878 [2024-11-29 07:48:03.615213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:13.878 [2024-11-29 07:48:03.615320] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:16:13.878 [2024-11-29 07:48:03.615339] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:13.878 [2024-11-29 07:48:03.615349] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:13.878 00:16:13.878 real 0m0.554s 00:16:13.878 user 0m0.334s 00:16:13.878 sys 0m0.114s 00:16:13.878 07:48:03 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:13.878 07:48:03 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:16:13.878 ************************************ 00:16:13.878 END TEST bdev_json_nonenclosed 00:16:13.878 ************************************ 00:16:14.140 07:48:03 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:14.140 07:48:03 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:14.140 07:48:03 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:14.140 07:48:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:14.140 ************************************ 00:16:14.140 START TEST bdev_json_nonarray 00:16:14.140 ************************************ 00:16:14.140 07:48:03 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:14.140 [2024-11-29 07:48:03.963556] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:16:14.140 [2024-11-29 07:48:03.963706] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73112 ] 00:16:14.401 [2024-11-29 07:48:04.131364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.401 [2024-11-29 07:48:04.249565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.401 [2024-11-29 07:48:04.249680] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:16:14.401 [2024-11-29 07:48:04.249712] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:14.401 [2024-11-29 07:48:04.249723] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:14.663 00:16:14.663 real 0m0.569s 00:16:14.663 user 0m0.325s 00:16:14.663 sys 0m0.134s 00:16:14.663 07:48:04 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:14.663 ************************************ 00:16:14.663 END TEST bdev_json_nonarray 00:16:14.663 ************************************ 00:16:14.663 07:48:04 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:16:14.663 07:48:04 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:16:14.663 07:48:04 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:16:14.663 07:48:04 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:16:14.663 07:48:04 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:16:14.663 07:48:04 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:16:14.663 07:48:04 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:16:14.663 07:48:04 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:14.663 07:48:04 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:16:14.663 07:48:04 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:16:14.663 07:48:04 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:16:14.663 07:48:04 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:16:14.663 07:48:04 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:15.237 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:23.382 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:23.382 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:23.382 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:23.382 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:23.382 00:16:23.382 real 0m58.409s 00:16:23.382 user 1m22.809s 00:16:23.382 sys 0m38.842s 00:16:23.382 ************************************ 00:16:23.382 END TEST blockdev_xnvme 00:16:23.382 ************************************ 00:16:23.382 07:48:12 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:23.382 07:48:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:23.382 07:48:12 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:23.382 07:48:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:23.382 07:48:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:23.382 07:48:12 -- 
common/autotest_common.sh@10 -- # set +x 00:16:23.382 ************************************ 00:16:23.382 START TEST ublk 00:16:23.382 ************************************ 00:16:23.382 07:48:12 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:23.382 * Looking for test storage... 00:16:23.382 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:23.382 07:48:12 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:23.382 07:48:12 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:23.382 07:48:12 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:16:23.382 07:48:12 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:23.382 07:48:12 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:23.382 07:48:12 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:23.382 07:48:12 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:23.382 07:48:12 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:16:23.382 07:48:12 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:16:23.382 07:48:12 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:16:23.382 07:48:12 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:16:23.382 07:48:12 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:16:23.382 07:48:12 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:16:23.382 07:48:12 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:16:23.382 07:48:12 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:23.382 07:48:12 ublk -- scripts/common.sh@344 -- # case "$op" in 00:16:23.382 07:48:12 ublk -- scripts/common.sh@345 -- # : 1 00:16:23.382 07:48:12 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:23.382 07:48:12 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:23.382 07:48:12 ublk -- scripts/common.sh@365 -- # decimal 1 00:16:23.382 07:48:12 ublk -- scripts/common.sh@353 -- # local d=1 00:16:23.382 07:48:12 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:23.382 07:48:12 ublk -- scripts/common.sh@355 -- # echo 1 00:16:23.382 07:48:12 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:16:23.382 07:48:12 ublk -- scripts/common.sh@366 -- # decimal 2 00:16:23.382 07:48:12 ublk -- scripts/common.sh@353 -- # local d=2 00:16:23.382 07:48:12 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:23.382 07:48:12 ublk -- scripts/common.sh@355 -- # echo 2 00:16:23.382 07:48:12 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:16:23.382 07:48:12 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:23.382 07:48:12 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:23.382 07:48:12 ublk -- scripts/common.sh@368 -- # return 0 00:16:23.382 07:48:12 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:23.382 07:48:12 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:23.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.382 --rc genhtml_branch_coverage=1 00:16:23.382 --rc genhtml_function_coverage=1 00:16:23.382 --rc genhtml_legend=1 00:16:23.382 --rc geninfo_all_blocks=1 00:16:23.382 --rc geninfo_unexecuted_blocks=1 00:16:23.382 00:16:23.382 ' 00:16:23.382 07:48:12 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:23.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.382 --rc genhtml_branch_coverage=1 00:16:23.382 --rc genhtml_function_coverage=1 00:16:23.382 --rc genhtml_legend=1 00:16:23.382 --rc geninfo_all_blocks=1 00:16:23.382 --rc geninfo_unexecuted_blocks=1 00:16:23.382 00:16:23.382 ' 00:16:23.382 07:48:12 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:23.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.382 --rc genhtml_branch_coverage=1 00:16:23.382 --rc genhtml_function_coverage=1 00:16:23.382 --rc genhtml_legend=1 00:16:23.382 --rc geninfo_all_blocks=1 00:16:23.382 --rc geninfo_unexecuted_blocks=1 00:16:23.382 00:16:23.382 ' 00:16:23.382 07:48:12 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:23.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.382 --rc genhtml_branch_coverage=1 00:16:23.382 --rc genhtml_function_coverage=1 00:16:23.382 --rc genhtml_legend=1 00:16:23.382 --rc geninfo_all_blocks=1 00:16:23.382 --rc geninfo_unexecuted_blocks=1 00:16:23.382 00:16:23.382 ' 00:16:23.382 07:48:12 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:23.382 07:48:12 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:23.382 07:48:12 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:23.382 07:48:12 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:23.382 07:48:12 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:23.382 07:48:12 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:23.382 07:48:12 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:23.382 07:48:12 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:23.382 07:48:12 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:16:23.382 07:48:12 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:16:23.382 07:48:12 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:16:23.382 07:48:12 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:16:23.382 07:48:12 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:16:23.382 07:48:12 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:16:23.382 07:48:12 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:16:23.382 07:48:12 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:16:23.382 07:48:12 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:16:23.382 07:48:12 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:16:23.382 07:48:12 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:16:23.382 07:48:12 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:16:23.382 07:48:12 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:23.382 07:48:12 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:23.382 07:48:12 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:23.382 ************************************ 00:16:23.382 START TEST test_save_ublk_config 00:16:23.382 ************************************ 00:16:23.382 07:48:12 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:16:23.382 07:48:12 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:16:23.382 07:48:12 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=73397 00:16:23.382 07:48:12 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:16:23.382 07:48:12 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 73397 00:16:23.382 07:48:12 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:16:23.382 07:48:12 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73397 ']' 00:16:23.382 07:48:12 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:23.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:23.382 07:48:12 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:23.382 07:48:12 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:23.382 07:48:12 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:23.382 07:48:12 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:23.382 [2024-11-29 07:48:12.838758] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:16:23.382 [2024-11-29 07:48:12.838905] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73397 ] 00:16:23.382 [2024-11-29 07:48:13.002901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.382 [2024-11-29 07:48:13.124840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.954 07:48:13 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:23.954 07:48:13 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:23.954 07:48:13 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:16:23.954 07:48:13 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:16:23.954 07:48:13 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.954 07:48:13 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:23.954 [2024-11-29 07:48:13.851470] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:23.954 [2024-11-29 07:48:13.852371] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:24.216 malloc0 00:16:24.216 [2024-11-29 07:48:13.923610] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:24.216 [2024-11-29 07:48:13.923711] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:24.216 [2024-11-29 07:48:13.923723] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:24.216 [2024-11-29 07:48:13.923731] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:24.216 [2024-11-29 07:48:13.932574] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:24.216 [2024-11-29 07:48:13.932605] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:24.216 [2024-11-29 07:48:13.939483] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:24.216 [2024-11-29 07:48:13.939608] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:24.216 [2024-11-29 07:48:13.956483] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:24.216 0 00:16:24.216 07:48:13 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.216 07:48:13 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:16:24.216 07:48:13 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.216 07:48:13 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:24.477 07:48:14 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.477 07:48:14 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:16:24.477 "subsystems": [ 00:16:24.477 { 00:16:24.477 "subsystem": "fsdev", 00:16:24.477 "config": [ 00:16:24.477 { 00:16:24.477 "method": "fsdev_set_opts", 00:16:24.477 "params": { 00:16:24.477 "fsdev_io_pool_size": 65535, 00:16:24.477 "fsdev_io_cache_size": 256 00:16:24.477 } 00:16:24.477 } 00:16:24.477 ] 00:16:24.477 }, 00:16:24.477 { 00:16:24.477 "subsystem": "keyring", 00:16:24.477 "config": [] 00:16:24.477 }, 00:16:24.477 { 00:16:24.477 "subsystem": "iobuf", 00:16:24.477 "config": [ 00:16:24.477 { 
00:16:24.477 "method": "iobuf_set_options", 00:16:24.477 "params": { 00:16:24.477 "small_pool_count": 8192, 00:16:24.477 "large_pool_count": 1024, 00:16:24.477 "small_bufsize": 8192, 00:16:24.477 "large_bufsize": 135168, 00:16:24.477 "enable_numa": false 00:16:24.477 } 00:16:24.477 } 00:16:24.477 ] 00:16:24.477 }, 00:16:24.477 { 00:16:24.477 "subsystem": "sock", 00:16:24.477 "config": [ 00:16:24.477 { 00:16:24.477 "method": "sock_set_default_impl", 00:16:24.477 "params": { 00:16:24.477 "impl_name": "posix" 00:16:24.477 } 00:16:24.477 }, 00:16:24.477 { 00:16:24.477 "method": "sock_impl_set_options", 00:16:24.477 "params": { 00:16:24.477 "impl_name": "ssl", 00:16:24.477 "recv_buf_size": 4096, 00:16:24.477 "send_buf_size": 4096, 00:16:24.477 "enable_recv_pipe": true, 00:16:24.477 "enable_quickack": false, 00:16:24.477 "enable_placement_id": 0, 00:16:24.477 "enable_zerocopy_send_server": true, 00:16:24.477 "enable_zerocopy_send_client": false, 00:16:24.477 "zerocopy_threshold": 0, 00:16:24.477 "tls_version": 0, 00:16:24.477 "enable_ktls": false 00:16:24.477 } 00:16:24.477 }, 00:16:24.477 { 00:16:24.477 "method": "sock_impl_set_options", 00:16:24.477 "params": { 00:16:24.477 "impl_name": "posix", 00:16:24.477 "recv_buf_size": 2097152, 00:16:24.477 "send_buf_size": 2097152, 00:16:24.477 "enable_recv_pipe": true, 00:16:24.477 "enable_quickack": false, 00:16:24.477 "enable_placement_id": 0, 00:16:24.477 "enable_zerocopy_send_server": true, 00:16:24.477 "enable_zerocopy_send_client": false, 00:16:24.477 "zerocopy_threshold": 0, 00:16:24.477 "tls_version": 0, 00:16:24.477 "enable_ktls": false 00:16:24.477 } 00:16:24.477 } 00:16:24.477 ] 00:16:24.477 }, 00:16:24.477 { 00:16:24.477 "subsystem": "vmd", 00:16:24.477 "config": [] 00:16:24.477 }, 00:16:24.477 { 00:16:24.477 "subsystem": "accel", 00:16:24.477 "config": [ 00:16:24.477 { 00:16:24.477 "method": "accel_set_options", 00:16:24.477 "params": { 00:16:24.477 "small_cache_size": 128, 00:16:24.477 "large_cache_size": 16, 00:16:24.477 "task_count": 2048, 00:16:24.477 "sequence_count": 2048, 00:16:24.477 "buf_count": 2048 00:16:24.477 } 00:16:24.477 } 00:16:24.477 ] 00:16:24.477 }, 00:16:24.477 { 00:16:24.477 "subsystem": "bdev", 00:16:24.477 "config": [ 00:16:24.477 { 00:16:24.477 "method": "bdev_set_options", 00:16:24.477 "params": { 00:16:24.477 "bdev_io_pool_size": 65535, 00:16:24.477 "bdev_io_cache_size": 256, 00:16:24.477 "bdev_auto_examine": true, 00:16:24.477 "iobuf_small_cache_size": 128, 00:16:24.477 "iobuf_large_cache_size": 16 00:16:24.477 } 00:16:24.477 }, 00:16:24.477 { 00:16:24.477 "method": "bdev_raid_set_options", 00:16:24.477 "params": { 00:16:24.477 "process_window_size_kb": 1024, 00:16:24.477 "process_max_bandwidth_mb_sec": 0 00:16:24.477 } 00:16:24.477 }, 00:16:24.477 { 00:16:24.477 "method": "bdev_iscsi_set_options", 00:16:24.477 "params": { 00:16:24.477 "timeout_sec": 30 00:16:24.477 } 00:16:24.477 }, 00:16:24.477 { 00:16:24.477 "method": "bdev_nvme_set_options", 00:16:24.477 "params": { 00:16:24.477 "action_on_timeout": "none", 00:16:24.477 "timeout_us": 0, 00:16:24.477 "timeout_admin_us": 0, 00:16:24.477 "keep_alive_timeout_ms": 10000, 00:16:24.477 "arbitration_burst": 0, 00:16:24.477 "low_priority_weight": 0, 00:16:24.477 "medium_priority_weight": 0, 00:16:24.477 "high_priority_weight": 0, 00:16:24.477 "nvme_adminq_poll_period_us": 10000, 00:16:24.477 "nvme_ioq_poll_period_us": 0, 00:16:24.477 "io_queue_requests": 0, 00:16:24.477 "delay_cmd_submit": true, 00:16:24.477 "transport_retry_count": 4, 00:16:24.477 
"bdev_retry_count": 3, 00:16:24.477 "transport_ack_timeout": 0, 00:16:24.477 "ctrlr_loss_timeout_sec": 0, 00:16:24.477 "reconnect_delay_sec": 0, 00:16:24.477 "fast_io_fail_timeout_sec": 0, 00:16:24.477 "disable_auto_failback": false, 00:16:24.477 "generate_uuids": false, 00:16:24.477 "transport_tos": 0, 00:16:24.477 "nvme_error_stat": false, 00:16:24.477 "rdma_srq_size": 0, 00:16:24.477 "io_path_stat": false, 00:16:24.477 "allow_accel_sequence": false, 00:16:24.478 "rdma_max_cq_size": 0, 00:16:24.478 "rdma_cm_event_timeout_ms": 0, 00:16:24.478 "dhchap_digests": [ 00:16:24.478 "sha256", 00:16:24.478 "sha384", 00:16:24.478 "sha512" 00:16:24.478 ], 00:16:24.478 "dhchap_dhgroups": [ 00:16:24.478 "null", 00:16:24.478 "ffdhe2048", 00:16:24.478 "ffdhe3072", 00:16:24.478 "ffdhe4096", 00:16:24.478 "ffdhe6144", 00:16:24.478 "ffdhe8192" 00:16:24.478 ] 00:16:24.478 } 00:16:24.478 }, 00:16:24.478 { 00:16:24.478 "method": "bdev_nvme_set_hotplug", 00:16:24.478 "params": { 00:16:24.478 "period_us": 100000, 00:16:24.478 "enable": false 00:16:24.478 } 00:16:24.478 }, 00:16:24.478 { 00:16:24.478 "method": "bdev_malloc_create", 00:16:24.478 "params": { 00:16:24.478 "name": "malloc0", 00:16:24.478 "num_blocks": 8192, 00:16:24.478 "block_size": 4096, 00:16:24.478 "physical_block_size": 4096, 00:16:24.478 "uuid": "ce506bac-6d6a-466e-9483-dd474e9a28f9", 00:16:24.478 "optimal_io_boundary": 0, 00:16:24.478 "md_size": 0, 00:16:24.478 "dif_type": 0, 00:16:24.478 "dif_is_head_of_md": false, 00:16:24.478 "dif_pi_format": 0 00:16:24.478 } 00:16:24.478 }, 00:16:24.478 { 00:16:24.478 "method": "bdev_wait_for_examine" 00:16:24.478 } 00:16:24.478 ] 00:16:24.478 }, 00:16:24.478 { 00:16:24.478 "subsystem": "scsi", 00:16:24.478 "config": null 00:16:24.478 }, 00:16:24.478 { 00:16:24.478 "subsystem": "scheduler", 00:16:24.478 "config": [ 00:16:24.478 { 00:16:24.478 "method": "framework_set_scheduler", 00:16:24.478 "params": { 00:16:24.478 "name": "static" 00:16:24.478 } 00:16:24.478 } 00:16:24.478 ] 00:16:24.478 }, 00:16:24.478 { 00:16:24.478 "subsystem": "vhost_scsi", 00:16:24.478 "config": [] 00:16:24.478 }, 00:16:24.478 { 00:16:24.478 "subsystem": "vhost_blk", 00:16:24.478 "config": [] 00:16:24.478 }, 00:16:24.478 { 00:16:24.478 "subsystem": "ublk", 00:16:24.478 "config": [ 00:16:24.478 { 00:16:24.478 "method": "ublk_create_target", 00:16:24.478 "params": { 00:16:24.478 "cpumask": "1" 00:16:24.478 } 00:16:24.478 }, 00:16:24.478 { 00:16:24.478 "method": "ublk_start_disk", 00:16:24.478 "params": { 00:16:24.478 "bdev_name": "malloc0", 00:16:24.478 "ublk_id": 0, 00:16:24.478 "num_queues": 1, 00:16:24.478 "queue_depth": 128 00:16:24.478 } 00:16:24.478 } 00:16:24.478 ] 00:16:24.478 }, 00:16:24.478 { 00:16:24.478 "subsystem": "nbd", 00:16:24.478 "config": [] 00:16:24.478 }, 00:16:24.478 { 00:16:24.478 "subsystem": "nvmf", 00:16:24.478 "config": [ 00:16:24.478 { 00:16:24.478 "method": "nvmf_set_config", 00:16:24.478 "params": { 00:16:24.478 "discovery_filter": "match_any", 00:16:24.478 "admin_cmd_passthru": { 00:16:24.478 "identify_ctrlr": false 00:16:24.478 }, 00:16:24.478 "dhchap_digests": [ 00:16:24.478 "sha256", 00:16:24.478 "sha384", 00:16:24.478 "sha512" 00:16:24.478 ], 00:16:24.478 "dhchap_dhgroups": [ 00:16:24.478 "null", 00:16:24.478 "ffdhe2048", 00:16:24.478 "ffdhe3072", 00:16:24.478 "ffdhe4096", 00:16:24.478 "ffdhe6144", 00:16:24.478 "ffdhe8192" 00:16:24.478 ] 00:16:24.478 } 00:16:24.478 }, 00:16:24.478 { 00:16:24.478 "method": "nvmf_set_max_subsystems", 00:16:24.478 "params": { 00:16:24.478 "max_subsystems": 1024 
00:16:24.478 } 00:16:24.478 }, 00:16:24.478 { 00:16:24.478 "method": "nvmf_set_crdt", 00:16:24.478 "params": { 00:16:24.478 "crdt1": 0, 00:16:24.478 "crdt2": 0, 00:16:24.478 "crdt3": 0 00:16:24.478 } 00:16:24.478 } 00:16:24.478 ] 00:16:24.478 }, 00:16:24.478 { 00:16:24.478 "subsystem": "iscsi", 00:16:24.478 "config": [ 00:16:24.478 { 00:16:24.478 "method": "iscsi_set_options", 00:16:24.478 "params": { 00:16:24.478 "node_base": "iqn.2016-06.io.spdk", 00:16:24.478 "max_sessions": 128, 00:16:24.478 "max_connections_per_session": 2, 00:16:24.478 "max_queue_depth": 64, 00:16:24.478 "default_time2wait": 2, 00:16:24.478 "default_time2retain": 20, 00:16:24.478 "first_burst_length": 8192, 00:16:24.478 "immediate_data": true, 00:16:24.478 "allow_duplicated_isid": false, 00:16:24.478 "error_recovery_level": 0, 00:16:24.478 "nop_timeout": 60, 00:16:24.478 "nop_in_interval": 30, 00:16:24.478 "disable_chap": false, 00:16:24.478 "require_chap": false, 00:16:24.478 "mutual_chap": false, 00:16:24.478 "chap_group": 0, 00:16:24.478 "max_large_datain_per_connection": 64, 00:16:24.478 "max_r2t_per_connection": 4, 00:16:24.478 "pdu_pool_size": 36864, 00:16:24.478 "immediate_data_pool_size": 16384, 00:16:24.478 "data_out_pool_size": 2048 00:16:24.478 } 00:16:24.478 } 00:16:24.478 ] 00:16:24.478 } 00:16:24.478 ] 00:16:24.478 }' 00:16:24.478 07:48:14 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 73397 00:16:24.478 07:48:14 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73397 ']' 00:16:24.478 07:48:14 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73397 00:16:24.478 07:48:14 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:24.478 07:48:14 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:24.478 07:48:14 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73397 00:16:24.478 07:48:14 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:24.478 killing process with pid 73397 00:16:24.478 07:48:14 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:24.478 07:48:14 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73397' 00:16:24.478 07:48:14 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73397 00:16:24.478 07:48:14 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73397 00:16:25.862 [2024-11-29 07:48:15.383231] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:25.862 [2024-11-29 07:48:15.420508] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:25.862 [2024-11-29 07:48:15.420639] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:25.862 [2024-11-29 07:48:15.428492] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:25.862 [2024-11-29 07:48:15.428558] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:25.862 [2024-11-29 07:48:15.428572] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:25.862 [2024-11-29 07:48:15.428601] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:25.862 [2024-11-29 07:48:15.428771] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:27.249 07:48:16 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=73459 00:16:27.249 07:48:16 ublk.test_save_ublk_config -- 
ublk/ublk.sh@121 -- # waitforlisten 73459 00:16:27.249 07:48:16 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73459 ']' 00:16:27.249 07:48:16 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.249 07:48:16 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:27.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:27.249 07:48:16 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.249 07:48:16 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:27.249 07:48:16 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:27.249 07:48:16 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:16:27.249 07:48:16 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:16:27.249 "subsystems": [ 00:16:27.249 { 00:16:27.249 "subsystem": "fsdev", 00:16:27.249 "config": [ 00:16:27.249 { 00:16:27.249 "method": "fsdev_set_opts", 00:16:27.249 "params": { 00:16:27.249 "fsdev_io_pool_size": 65535, 00:16:27.249 "fsdev_io_cache_size": 256 00:16:27.249 } 00:16:27.249 } 00:16:27.249 ] 00:16:27.249 }, 00:16:27.249 { 00:16:27.249 "subsystem": "keyring", 00:16:27.249 "config": [] 00:16:27.249 }, 00:16:27.249 { 00:16:27.249 "subsystem": "iobuf", 00:16:27.249 "config": [ 00:16:27.249 { 00:16:27.249 "method": "iobuf_set_options", 00:16:27.249 "params": { 00:16:27.249 "small_pool_count": 8192, 00:16:27.249 "large_pool_count": 1024, 00:16:27.249 "small_bufsize": 8192, 00:16:27.249 "large_bufsize": 135168, 00:16:27.249 "enable_numa": false 00:16:27.249 } 00:16:27.249 } 00:16:27.249 ] 00:16:27.249 }, 00:16:27.249 { 00:16:27.249 "subsystem": "sock", 00:16:27.249 "config": [ 00:16:27.249 { 00:16:27.249 "method": "sock_set_default_impl", 00:16:27.249 "params": { 00:16:27.249 "impl_name": "posix" 00:16:27.249 } 00:16:27.249 }, 00:16:27.249 { 00:16:27.249 "method": "sock_impl_set_options", 00:16:27.249 "params": { 00:16:27.249 "impl_name": "ssl", 00:16:27.249 "recv_buf_size": 4096, 00:16:27.249 "send_buf_size": 4096, 00:16:27.249 "enable_recv_pipe": true, 00:16:27.249 "enable_quickack": false, 00:16:27.249 "enable_placement_id": 0, 00:16:27.249 "enable_zerocopy_send_server": true, 00:16:27.249 "enable_zerocopy_send_client": false, 00:16:27.249 "zerocopy_threshold": 0, 00:16:27.249 "tls_version": 0, 00:16:27.249 "enable_ktls": false 00:16:27.249 } 00:16:27.249 }, 00:16:27.249 { 00:16:27.249 "method": "sock_impl_set_options", 00:16:27.249 "params": { 00:16:27.249 "impl_name": "posix", 00:16:27.249 "recv_buf_size": 2097152, 00:16:27.249 "send_buf_size": 2097152, 00:16:27.249 "enable_recv_pipe": true, 00:16:27.249 "enable_quickack": false, 00:16:27.249 "enable_placement_id": 0, 00:16:27.249 "enable_zerocopy_send_server": true, 00:16:27.249 "enable_zerocopy_send_client": false, 00:16:27.249 "zerocopy_threshold": 0, 00:16:27.249 "tls_version": 0, 00:16:27.249 "enable_ktls": false 00:16:27.249 } 00:16:27.249 } 00:16:27.249 ] 00:16:27.249 }, 00:16:27.249 { 00:16:27.249 "subsystem": "vmd", 00:16:27.249 "config": [] 00:16:27.249 }, 00:16:27.249 { 00:16:27.249 "subsystem": "accel", 00:16:27.249 "config": [ 00:16:27.249 { 00:16:27.249 "method": "accel_set_options", 00:16:27.249 "params": { 00:16:27.249 "small_cache_size": 128, 
00:16:27.249 "large_cache_size": 16, 00:16:27.249 "task_count": 2048, 00:16:27.249 "sequence_count": 2048, 00:16:27.249 "buf_count": 2048 00:16:27.249 } 00:16:27.249 } 00:16:27.249 ] 00:16:27.249 }, 00:16:27.249 { 00:16:27.249 "subsystem": "bdev", 00:16:27.249 "config": [ 00:16:27.249 { 00:16:27.249 "method": "bdev_set_options", 00:16:27.249 "params": { 00:16:27.249 "bdev_io_pool_size": 65535, 00:16:27.249 "bdev_io_cache_size": 256, 00:16:27.249 "bdev_auto_examine": true, 00:16:27.249 "iobuf_small_cache_size": 128, 00:16:27.249 "iobuf_large_cache_size": 16 00:16:27.249 } 00:16:27.249 }, 00:16:27.249 { 00:16:27.249 "method": "bdev_raid_set_options", 00:16:27.249 "params": { 00:16:27.249 "process_window_size_kb": 1024, 00:16:27.249 "process_max_bandwidth_mb_sec": 0 00:16:27.249 } 00:16:27.249 }, 00:16:27.249 { 00:16:27.249 "method": "bdev_iscsi_set_options", 00:16:27.249 "params": { 00:16:27.249 "timeout_sec": 30 00:16:27.249 } 00:16:27.249 }, 00:16:27.249 { 00:16:27.249 "method": "bdev_nvme_set_options", 00:16:27.249 "params": { 00:16:27.249 "action_on_timeout": "none", 00:16:27.249 "timeout_us": 0, 00:16:27.249 "timeout_admin_us": 0, 00:16:27.249 "keep_alive_timeout_ms": 10000, 00:16:27.249 "arbitration_burst": 0, 00:16:27.249 "low_priority_weight": 0, 00:16:27.249 "medium_priority_weight": 0, 00:16:27.249 "high_priority_weight": 0, 00:16:27.249 "nvme_adminq_poll_period_us": 10000, 00:16:27.249 "nvme_ioq_poll_period_us": 0, 00:16:27.249 "io_queue_requests": 0, 00:16:27.249 "delay_cmd_submit": true, 00:16:27.249 "transport_retry_count": 4, 00:16:27.249 "bdev_retry_count": 3, 00:16:27.249 "transport_ack_timeout": 0, 00:16:27.249 "ctrlr_loss_timeout_sec": 0, 00:16:27.249 "reconnect_delay_sec": 0, 00:16:27.249 "fast_io_fail_timeout_sec": 0, 00:16:27.249 "disable_auto_failback": false, 00:16:27.249 "generate_uuids": false, 00:16:27.249 "transport_tos": 0, 00:16:27.249 "nvme_error_stat": false, 00:16:27.249 "rdma_srq_size": 0, 00:16:27.249 "io_path_stat": false, 00:16:27.249 "allow_accel_sequence": false, 00:16:27.249 "rdma_max_cq_size": 0, 00:16:27.249 "rdma_cm_event_timeout_ms": 0, 00:16:27.249 "dhchap_digests": [ 00:16:27.249 "sha256", 00:16:27.249 "sha384", 00:16:27.249 "sha512" 00:16:27.249 ], 00:16:27.249 "dhchap_dhgroups": [ 00:16:27.249 "null", 00:16:27.249 "ffdhe2048", 00:16:27.250 "ffdhe3072", 00:16:27.250 "ffdhe4096", 00:16:27.250 "ffdhe6144", 00:16:27.250 "ffdhe8192" 00:16:27.250 ] 00:16:27.250 } 00:16:27.250 }, 00:16:27.250 { 00:16:27.250 "method": "bdev_nvme_set_hotplug", 00:16:27.250 "params": { 00:16:27.250 "period_us": 100000, 00:16:27.250 "enable": false 00:16:27.250 } 00:16:27.250 }, 00:16:27.250 { 00:16:27.250 "method": "bdev_malloc_create", 00:16:27.250 "params": { 00:16:27.250 "name": "malloc0", 00:16:27.250 "num_blocks": 8192, 00:16:27.250 "block_size": 4096, 00:16:27.250 "physical_block_size": 4096, 00:16:27.250 "uuid": "ce506bac-6d6a-466e-9483-dd474e9a28f9", 00:16:27.250 "optimal_io_boundary": 0, 00:16:27.250 "md_size": 0, 00:16:27.250 "dif_type": 0, 00:16:27.250 "dif_is_head_of_md": false, 00:16:27.250 "dif_pi_format": 0 00:16:27.250 } 00:16:27.250 }, 00:16:27.250 { 00:16:27.250 "method": "bdev_wait_for_examine" 00:16:27.250 } 00:16:27.250 ] 00:16:27.250 }, 00:16:27.250 { 00:16:27.250 "subsystem": "scsi", 00:16:27.250 "config": null 00:16:27.250 }, 00:16:27.250 { 00:16:27.250 "subsystem": "scheduler", 00:16:27.250 "config": [ 00:16:27.250 { 00:16:27.250 "method": "framework_set_scheduler", 00:16:27.250 "params": { 00:16:27.250 "name": "static" 00:16:27.250 } 
00:16:27.250 } 00:16:27.250 ] 00:16:27.250 }, 00:16:27.250 { 00:16:27.250 "subsystem": "vhost_scsi", 00:16:27.250 "config": [] 00:16:27.250 }, 00:16:27.250 { 00:16:27.250 "subsystem": "vhost_blk", 00:16:27.250 "config": [] 00:16:27.250 }, 00:16:27.250 { 00:16:27.250 "subsystem": "ublk", 00:16:27.250 "config": [ 00:16:27.250 { 00:16:27.250 "method": "ublk_create_target", 00:16:27.250 "params": { 00:16:27.250 "cpumask": "1" 00:16:27.250 } 00:16:27.250 }, 00:16:27.250 { 00:16:27.250 "method": "ublk_start_disk", 00:16:27.250 "params": { 00:16:27.250 "bdev_name": "malloc0", 00:16:27.250 "ublk_id": 0, 00:16:27.250 "num_queues": 1, 00:16:27.250 "queue_depth": 128 00:16:27.250 } 00:16:27.250 } 00:16:27.250 ] 00:16:27.250 }, 00:16:27.250 { 00:16:27.250 "subsystem": "nbd", 00:16:27.250 "config": [] 00:16:27.250 }, 00:16:27.250 { 00:16:27.250 "subsystem": "nvmf", 00:16:27.250 "config": [ 00:16:27.250 { 00:16:27.250 "method": "nvmf_set_config", 00:16:27.250 "params": { 00:16:27.250 "discovery_filter": "match_any", 00:16:27.250 "admin_cmd_passthru": { 00:16:27.250 "identify_ctrlr": false 00:16:27.250 }, 00:16:27.250 "dhchap_digests": [ 00:16:27.250 "sha256", 00:16:27.250 "sha384", 00:16:27.250 "sha512" 00:16:27.250 ], 00:16:27.250 "dhchap_dhgroups": [ 00:16:27.250 "null", 00:16:27.250 "ffdhe2048", 00:16:27.250 "ffdhe3072", 00:16:27.250 "ffdhe4096", 00:16:27.250 "ffdhe6144", 00:16:27.250 "ffdhe8192" 00:16:27.250 ] 00:16:27.250 } 00:16:27.250 }, 00:16:27.250 { 00:16:27.250 "method": "nvmf_set_max_subsystems", 00:16:27.250 "params": { 00:16:27.250 "max_subsystems": 1024 00:16:27.250 } 00:16:27.250 }, 00:16:27.250 { 00:16:27.250 "method": "nvmf_set_crdt", 00:16:27.250 "params": { 00:16:27.250 "crdt1": 0, 00:16:27.250 "crdt2": 0, 00:16:27.250 "crdt3": 0 00:16:27.250 } 00:16:27.250 } 00:16:27.250 ] 00:16:27.250 }, 00:16:27.250 { 00:16:27.250 "subsystem": "iscsi", 00:16:27.250 "config": [ 00:16:27.250 { 00:16:27.250 "method": "iscsi_set_options", 00:16:27.250 "params": { 00:16:27.250 "node_base": "iqn.2016-06.io.spdk", 00:16:27.250 "max_sessions": 128, 00:16:27.250 "max_connections_per_session": 2, 00:16:27.250 "max_queue_depth": 64, 00:16:27.250 "default_time2wait": 2, 00:16:27.250 "default_time2retain": 20, 00:16:27.250 "first_burst_length": 8192, 00:16:27.250 "immediate_data": true, 00:16:27.250 "allow_duplicated_isid": false, 00:16:27.250 "error_recovery_level": 0, 00:16:27.250 "nop_timeout": 60, 00:16:27.250 "nop_in_interval": 30, 00:16:27.250 "disable_chap": false, 00:16:27.250 "require_chap": false, 00:16:27.250 "mutual_chap": false, 00:16:27.250 "chap_group": 0, 00:16:27.250 "max_large_datain_per_connection": 64, 00:16:27.250 "max_r2t_per_connection": 4, 00:16:27.250 "pdu_pool_size": 36864, 00:16:27.250 "immediate_data_pool_size": 16384, 00:16:27.250 "data_out_pool_size": 2048 00:16:27.250 } 00:16:27.250 } 00:16:27.250 ] 00:16:27.250 } 00:16:27.250 ] 00:16:27.250 }' 00:16:27.250 [2024-11-29 07:48:17.003035] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
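Everything echoed above is the save_config JSON captured from the first target, now piped verbatim into a fresh spdk_tgt through -c /dev/fd/63. The round-trip being exercised is, sketched with a regular file and the same assumed rpc.py wrapper:

  ./scripts/rpc.py save_config > ublk_config.json    # on the first target
  kill "$tgtpid" && wait "$tgtpid"                   # stop it
  ./build/bin/spdk_tgt -L ublk -c ublk_config.json   # restart from the saved state

After the restart the test only has to confirm that ublk_get_disks reports /dev/ublkb0 again and that it exists as a block device, without issuing any further configuration RPCs.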
00:16:27.250 [2024-11-29 07:48:17.003181] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73459 ] 00:16:27.250 [2024-11-29 07:48:17.166892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.511 [2024-11-29 07:48:17.287799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.455 [2024-11-29 07:48:18.152468] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:28.455 [2024-11-29 07:48:18.153402] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:28.455 [2024-11-29 07:48:18.160612] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:28.455 [2024-11-29 07:48:18.160708] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:28.455 [2024-11-29 07:48:18.160720] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:28.455 [2024-11-29 07:48:18.160728] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:28.455 [2024-11-29 07:48:18.169567] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:28.456 [2024-11-29 07:48:18.169596] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:28.456 [2024-11-29 07:48:18.176486] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:28.456 [2024-11-29 07:48:18.176606] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:28.456 [2024-11-29 07:48:18.193469] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:28.456 07:48:18 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:28.456 07:48:18 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:28.456 07:48:18 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:16:28.456 07:48:18 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.456 07:48:18 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:16:28.456 07:48:18 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:28.456 07:48:18 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.456 07:48:18 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:28.456 07:48:18 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:16:28.456 07:48:18 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 73459 00:16:28.456 07:48:18 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73459 ']' 00:16:28.456 07:48:18 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73459 00:16:28.456 07:48:18 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:28.456 07:48:18 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:28.456 07:48:18 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73459 00:16:28.456 killing process with pid 73459 00:16:28.456 07:48:18 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:28.456 
07:48:18 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:28.456 07:48:18 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73459' 00:16:28.456 07:48:18 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73459 00:16:28.456 07:48:18 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73459 00:16:29.842 [2024-11-29 07:48:19.488303] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:29.842 [2024-11-29 07:48:19.525473] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:29.842 [2024-11-29 07:48:19.525572] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:29.842 [2024-11-29 07:48:19.535460] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:29.842 [2024-11-29 07:48:19.535507] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:29.842 [2024-11-29 07:48:19.535514] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:29.842 [2024-11-29 07:48:19.535534] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:29.842 [2024-11-29 07:48:19.535639] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:30.779 07:48:20 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:16:30.779 00:16:30.779 real 0m7.959s 00:16:30.779 user 0m5.586s 00:16:30.779 sys 0m3.038s 00:16:30.779 07:48:20 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:30.779 07:48:20 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:30.779 ************************************ 00:16:30.779 END TEST test_save_ublk_config 00:16:30.779 ************************************ 00:16:31.039 07:48:20 ublk -- ublk/ublk.sh@139 -- # spdk_pid=73538 00:16:31.039 07:48:20 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:31.039 07:48:20 ublk -- ublk/ublk.sh@141 -- # waitforlisten 73538 00:16:31.039 07:48:20 ublk -- common/autotest_common.sh@835 -- # '[' -z 73538 ']' 00:16:31.039 07:48:20 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:31.039 07:48:20 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:31.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:31.039 07:48:20 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:31.039 07:48:20 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:31.039 07:48:20 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:31.039 07:48:20 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:31.039 [2024-11-29 07:48:20.824674] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
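Unlike the earlier runs (-c 0x1 in their EAL parameters), this target is launched with -m 0x3, so two reactors come up on cores 0 and 1 while the ublk target's poller threads stay pinned through the cpumask parameter seen in the config dumps above. A sketch, with the rpc.py flag name being an assumption (the underlying RPC parameter is "cpumask"):

  ./build/bin/spdk_tgt -m 0x3 -L ublk &
  ./scripts/rpc.py ublk_create_target -c 1   # '-c' assumed; pins ublk pollers to core 0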
00:16:31.039 [2024-11-29 07:48:20.824800] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73538 ] 00:16:31.039 [2024-11-29 07:48:20.980571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:31.297 [2024-11-29 07:48:21.066637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:31.297 [2024-11-29 07:48:21.066701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.864 07:48:21 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:31.864 07:48:21 ublk -- common/autotest_common.sh@868 -- # return 0 00:16:31.864 07:48:21 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:16:31.864 07:48:21 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:31.864 07:48:21 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:31.864 07:48:21 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:31.864 ************************************ 00:16:31.864 START TEST test_create_ublk 00:16:31.864 ************************************ 00:16:31.864 07:48:21 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:16:31.864 07:48:21 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:16:31.864 07:48:21 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.864 07:48:21 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:31.864 [2024-11-29 07:48:21.671460] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:31.864 [2024-11-29 07:48:21.672953] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:31.864 07:48:21 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.864 07:48:21 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:16:31.864 07:48:21 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:16:31.864 07:48:21 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.864 07:48:21 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:32.123 07:48:21 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.123 07:48:21 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:16:32.123 07:48:21 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:32.123 07:48:21 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.123 07:48:21 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:32.123 [2024-11-29 07:48:21.834576] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:16:32.123 [2024-11-29 07:48:21.834885] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:32.123 [2024-11-29 07:48:21.834898] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:32.123 [2024-11-29 07:48:21.834904] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:32.123 [2024-11-29 07:48:21.842482] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:32.123 [2024-11-29 07:48:21.842500] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:32.123 
[2024-11-29 07:48:21.850465] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:32.123 [2024-11-29 07:48:21.850950] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:32.123 [2024-11-29 07:48:21.881472] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:32.123 07:48:21 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.123 07:48:21 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:16:32.123 07:48:21 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:16:32.123 07:48:21 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:16:32.123 07:48:21 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.123 07:48:21 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:32.124 07:48:21 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.124 07:48:21 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:16:32.124 { 00:16:32.124 "ublk_device": "/dev/ublkb0", 00:16:32.124 "id": 0, 00:16:32.124 "queue_depth": 512, 00:16:32.124 "num_queues": 4, 00:16:32.124 "bdev_name": "Malloc0" 00:16:32.124 } 00:16:32.124 ]' 00:16:32.124 07:48:21 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:16:32.124 07:48:21 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:32.124 07:48:21 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:16:32.124 07:48:21 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:16:32.124 07:48:21 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:16:32.124 07:48:22 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:16:32.124 07:48:22 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:16:32.124 07:48:22 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:16:32.124 07:48:22 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:16:32.382 07:48:22 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:32.382 07:48:22 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:16:32.382 07:48:22 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:16:32.382 07:48:22 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:16:32.382 07:48:22 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:16:32.382 07:48:22 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:16:32.382 07:48:22 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:16:32.382 07:48:22 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:16:32.382 07:48:22 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:16:32.382 07:48:22 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:16:32.382 07:48:22 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:16:32.382 07:48:22 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
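run_fio_test has now assembled the whole job into one line; wrapped for readability, the invocation below is identical to what runs next. Because --time_based --runtime=10 spends the entire run on the write phase, fio immediately warns that the verify read phase never starts: the 0xcc pattern is still written, it just is not read back within this job.

  fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
      --rw=write --direct=1 --time_based --runtime=10 \
      --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0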
00:16:32.382 07:48:22 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:16:32.382 fio: verification read phase will never start because write phase uses all of runtime 00:16:32.382 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:16:32.382 fio-3.35 00:16:32.382 Starting 1 process 00:16:44.582 00:16:44.582 fio_test: (groupid=0, jobs=1): err= 0: pid=73577: Fri Nov 29 07:48:32 2024 00:16:44.582 write: IOPS=15.7k, BW=61.3MiB/s (64.2MB/s)(613MiB/10001msec); 0 zone resets 00:16:44.582 clat (usec): min=39, max=4001, avg=63.01, stdev=98.09 00:16:44.582 lat (usec): min=39, max=4018, avg=63.44, stdev=98.10 00:16:44.582 clat percentiles (usec): 00:16:44.582 | 1.00th=[ 46], 5.00th=[ 50], 10.00th=[ 52], 20.00th=[ 54], 00:16:44.582 | 30.00th=[ 56], 40.00th=[ 57], 50.00th=[ 59], 60.00th=[ 61], 00:16:44.582 | 70.00th=[ 62], 80.00th=[ 65], 90.00th=[ 69], 95.00th=[ 72], 00:16:44.582 | 99.00th=[ 82], 99.50th=[ 90], 99.90th=[ 1975], 99.95th=[ 2835], 00:16:44.582 | 99.99th=[ 3556] 00:16:44.582 bw ( KiB/s): min=56600, max=65568, per=100.00%, avg=62821.05, stdev=1972.12, samples=19 00:16:44.582 iops : min=14150, max=16392, avg=15705.26, stdev=493.03, samples=19 00:16:44.582 lat (usec) : 50=4.68%, 100=94.92%, 250=0.20%, 500=0.02%, 750=0.01% 00:16:44.582 lat (usec) : 1000=0.01% 00:16:44.582 lat (msec) : 2=0.06%, 4=0.10%, 10=0.01% 00:16:44.582 cpu : usr=1.86%, sys=10.81%, ctx=156875, majf=0, minf=797 00:16:44.582 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:44.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:44.582 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:44.582 issued rwts: total=0,156872,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:44.582 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:44.582 00:16:44.582 Run status group 0 (all jobs): 00:16:44.582 WRITE: bw=61.3MiB/s (64.2MB/s), 61.3MiB/s-61.3MiB/s (64.2MB/s-64.2MB/s), io=613MiB (643MB), run=10001-10001msec 00:16:44.583 00:16:44.583 Disk stats (read/write): 00:16:44.583 ublkb0: ios=0/155300, merge=0/0, ticks=0/8515, in_queue=8515, util=99.08% 00:16:44.583 07:48:32 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:16:44.583 07:48:32 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.583 07:48:32 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:44.583 [2024-11-29 07:48:32.309966] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:44.583 [2024-11-29 07:48:32.337994] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:44.583 [2024-11-29 07:48:32.338958] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:44.583 [2024-11-29 07:48:32.345476] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:44.583 [2024-11-29 07:48:32.345732] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:44.583 [2024-11-29 07:48:32.345741] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:44.583 07:48:32 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.583 07:48:32 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 
0 00:16:44.583 07:48:32 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:16:44.583 07:48:32 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:16:44.583 07:48:32 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:44.583 07:48:32 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:44.583 07:48:32 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:44.583 07:48:32 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:44.583 07:48:32 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:16:44.583 07:48:32 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.583 07:48:32 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:44.583 [2024-11-29 07:48:32.369520] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:16:44.583 request: 00:16:44.583 { 00:16:44.583 "ublk_id": 0, 00:16:44.583 "method": "ublk_stop_disk", 00:16:44.583 "req_id": 1 00:16:44.583 } 00:16:44.583 Got JSON-RPC error response 00:16:44.583 response: 00:16:44.583 { 00:16:44.583 "code": -19, 00:16:44.583 "message": "No such device" 00:16:44.583 } 00:16:44.583 07:48:32 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:44.583 07:48:32 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:16:44.583 07:48:32 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:44.583 07:48:32 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:44.583 07:48:32 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:44.583 07:48:32 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:16:44.583 07:48:32 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.583 07:48:32 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:44.583 [2024-11-29 07:48:32.385532] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:44.583 [2024-11-29 07:48:32.393462] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:44.583 [2024-11-29 07:48:32.393492] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:44.583 07:48:32 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.583 07:48:32 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:44.583 07:48:32 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.583 07:48:32 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:44.583 07:48:32 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.583 07:48:32 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:16:44.583 07:48:32 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:44.583 07:48:32 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.583 07:48:32 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:44.583 07:48:32 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.583 07:48:32 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:44.583 07:48:32 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:16:44.583 07:48:32 ublk.test_create_ublk -- 
lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:44.583 07:48:32 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:44.583 07:48:32 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.583 07:48:32 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:44.583 07:48:32 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.583 07:48:32 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:44.583 07:48:32 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:16:44.583 ************************************ 00:16:44.583 END TEST test_create_ublk 00:16:44.583 ************************************ 00:16:44.583 07:48:32 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:44.583 00:16:44.583 real 0m11.182s 00:16:44.583 user 0m0.486s 00:16:44.583 sys 0m1.155s 00:16:44.583 07:48:32 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:44.583 07:48:32 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:44.583 07:48:32 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:16:44.583 07:48:32 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:44.583 07:48:32 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:44.583 07:48:32 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:44.583 ************************************ 00:16:44.583 START TEST test_create_multi_ublk 00:16:44.583 ************************************ 00:16:44.583 07:48:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:16:44.583 07:48:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:16:44.583 07:48:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.583 07:48:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:44.583 [2024-11-29 07:48:32.893459] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:44.583 [2024-11-29 07:48:32.894919] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:44.583 07:48:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.583 07:48:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:16:44.583 07:48:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:16:44.583 07:48:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:44.583 07:48:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:16:44.583 07:48:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.583 07:48:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:44.583 07:48:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.583 07:48:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:16:44.583 07:48:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:44.583 07:48:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.583 07:48:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:44.583 [2024-11-29 07:48:33.109571] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 
00:16:44.583 [2024-11-29 07:48:33.109879] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:44.583 [2024-11-29 07:48:33.109886] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:44.583 [2024-11-29 07:48:33.109894] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:44.583 [2024-11-29 07:48:33.130240] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:44.583 [2024-11-29 07:48:33.130263] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:44.583 [2024-11-29 07:48:33.140476] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:44.583 [2024-11-29 07:48:33.140980] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:44.583 [2024-11-29 07:48:33.170469] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:44.583 07:48:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.583 07:48:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:16:44.583 07:48:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:44.583 07:48:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:16:44.583 07:48:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.583 07:48:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:44.583 07:48:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.583 07:48:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:16:44.583 07:48:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:16:44.583 07:48:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.583 07:48:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:44.583 [2024-11-29 07:48:33.409557] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:16:44.583 [2024-11-29 07:48:33.409848] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:16:44.583 [2024-11-29 07:48:33.409856] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:44.583 [2024-11-29 07:48:33.409861] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:44.583 [2024-11-29 07:48:33.421482] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:44.583 [2024-11-29 07:48:33.421499] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:44.583 [2024-11-29 07:48:33.433471] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:44.583 [2024-11-29 07:48:33.433952] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:44.583 [2024-11-29 07:48:33.458475] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:44.583 07:48:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.583 07:48:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:16:44.583 07:48:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:44.583 07:48:33 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:16:44.583 07:48:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.584 07:48:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:44.584 07:48:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.584 07:48:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:16:44.584 07:48:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:16:44.584 07:48:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.584 07:48:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:44.584 [2024-11-29 07:48:33.697555] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:16:44.584 [2024-11-29 07:48:33.697860] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:16:44.584 [2024-11-29 07:48:33.697870] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:16:44.584 [2024-11-29 07:48:33.697876] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:16:44.584 [2024-11-29 07:48:33.710624] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:44.584 [2024-11-29 07:48:33.710644] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:44.584 [2024-11-29 07:48:33.721472] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:44.584 [2024-11-29 07:48:33.721961] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:16:44.584 [2024-11-29 07:48:33.734483] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:16:44.584 07:48:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.584 07:48:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:16:44.584 07:48:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:44.584 07:48:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:16:44.584 07:48:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.584 07:48:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:44.584 07:48:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.584 07:48:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:16:44.584 07:48:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:16:44.584 07:48:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.584 07:48:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:44.584 [2024-11-29 07:48:33.973580] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:16:44.584 [2024-11-29 07:48:33.973873] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:16:44.584 [2024-11-29 07:48:33.973885] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:16:44.584 [2024-11-29 07:48:33.973890] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:16:44.584 [2024-11-29 
07:48:33.985488] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:44.584 [2024-11-29 07:48:33.985506] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:44.584 [2024-11-29 07:48:33.997477] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:44.584 [2024-11-29 07:48:33.997969] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:16:44.584 [2024-11-29 07:48:34.026492] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:16:44.584 07:48:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.584 07:48:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:16:44.584 07:48:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:16:44.584 07:48:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.584 07:48:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:44.584 07:48:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.584 07:48:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:16:44.584 { 00:16:44.584 "ublk_device": "/dev/ublkb0", 00:16:44.584 "id": 0, 00:16:44.584 "queue_depth": 512, 00:16:44.584 "num_queues": 4, 00:16:44.584 "bdev_name": "Malloc0" 00:16:44.584 }, 00:16:44.584 { 00:16:44.584 "ublk_device": "/dev/ublkb1", 00:16:44.584 "id": 1, 00:16:44.584 "queue_depth": 512, 00:16:44.584 "num_queues": 4, 00:16:44.584 "bdev_name": "Malloc1" 00:16:44.584 }, 00:16:44.584 { 00:16:44.584 "ublk_device": "/dev/ublkb2", 00:16:44.584 "id": 2, 00:16:44.584 "queue_depth": 512, 00:16:44.584 "num_queues": 4, 00:16:44.584 "bdev_name": "Malloc2" 00:16:44.584 }, 00:16:44.584 { 00:16:44.584 "ublk_device": "/dev/ublkb3", 00:16:44.584 "id": 3, 00:16:44.584 "queue_depth": 512, 00:16:44.584 "num_queues": 4, 00:16:44.584 "bdev_name": "Malloc3" 00:16:44.584 } 00:16:44.584 ]' 00:16:44.584 07:48:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:16:44.584 07:48:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:44.584 07:48:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:16:44.584 07:48:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:44.584 07:48:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:16:44.584 07:48:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:16:44.584 07:48:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:16:44.584 07:48:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:44.584 07:48:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:16:44.584 07:48:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:44.584 07:48:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:16:44.584 07:48:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:44.584 07:48:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:44.584 07:48:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:16:44.584 07:48:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 
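All four devices go through the same create-and-check pattern as ublk0 above; condensed, assuming the rpc.py and jq tooling the harness wraps:

  for i in 0 1 2 3; do
    ./scripts/rpc.py bdev_malloc_create -b "Malloc$i" 128 4096
    ./scripts/rpc.py ublk_start_disk "Malloc$i" "$i" -q 4 -d 512
  done
  ./scripts/rpc.py ublk_get_disks   # returns the four-entry array shown above

The jq probes then compare each array entry's ublk_device, id, queue_depth, num_queues, and bdev_name against the values requested at creation.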
00:16:44.584 07:48:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:16:44.584 07:48:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:16:44.584 07:48:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:16:44.584 07:48:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:44.584 07:48:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:16:44.584 07:48:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:44.584 07:48:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:16:44.584 07:48:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:16:44.584 07:48:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:44.584 07:48:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:16:44.584 07:48:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:16:44.584 07:48:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:16:44.584 07:48:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:16:44.584 07:48:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:16:44.584 07:48:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:44.584 07:48:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:16:44.584 07:48:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:44.584 07:48:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:16:44.843 07:48:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:16:44.843 07:48:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:44.843 07:48:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:16:44.843 07:48:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:16:44.843 07:48:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:16:44.843 07:48:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:16:44.843 07:48:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:16:44.843 07:48:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:44.843 07:48:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:16:44.843 07:48:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:44.843 07:48:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:16:44.843 07:48:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:16:44.843 07:48:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:16:44.843 07:48:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:16:44.843 07:48:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:44.843 07:48:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:16:44.843 07:48:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.843 07:48:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:44.843 [2024-11-29 07:48:34.705549] ublk.c: 469:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:44.843 [2024-11-29 07:48:34.745519] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:44.843 [2024-11-29 07:48:34.746411] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:44.843 [2024-11-29 07:48:34.753475] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:44.843 [2024-11-29 07:48:34.753733] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:44.843 [2024-11-29 07:48:34.753747] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:44.843 07:48:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.843 07:48:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:44.843 07:48:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:16:44.843 07:48:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.843 07:48:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:44.843 [2024-11-29 07:48:34.769517] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:45.102 [2024-11-29 07:48:34.814505] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:45.102 [2024-11-29 07:48:34.815348] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:45.102 [2024-11-29 07:48:34.820462] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:45.102 [2024-11-29 07:48:34.820713] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:45.102 [2024-11-29 07:48:34.820726] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:45.102 07:48:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.102 07:48:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:45.102 07:48:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:16:45.102 07:48:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.102 07:48:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:45.102 [2024-11-29 07:48:34.828548] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:16:45.102 [2024-11-29 07:48:34.869870] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:45.102 [2024-11-29 07:48:34.871023] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:16:45.102 [2024-11-29 07:48:34.872916] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:45.102 [2024-11-29 07:48:34.873166] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:16:45.102 [2024-11-29 07:48:34.873179] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:16:45.102 07:48:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.102 07:48:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:45.102 07:48:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:16:45.102 07:48:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.102 07:48:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
00:16:45.102 [2024-11-29 07:48:34.891528] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:16:45.102 [2024-11-29 07:48:34.929984] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:45.102 [2024-11-29 07:48:34.930913] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:16:45.102 [2024-11-29 07:48:34.939486] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:45.102 [2024-11-29 07:48:34.939713] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:16:45.102 [2024-11-29 07:48:34.939725] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:16:45.102 07:48:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.102 07:48:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:16:45.359 [2024-11-29 07:48:35.131506] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:45.359 [2024-11-29 07:48:35.139460] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:45.360 [2024-11-29 07:48:35.139489] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:45.360 07:48:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:16:45.360 07:48:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:45.360 07:48:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:45.360 07:48:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.360 07:48:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:45.616 07:48:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.616 07:48:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:45.616 07:48:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:16:45.616 07:48:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.616 07:48:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:46.197 07:48:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.197 07:48:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:46.197 07:48:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:16:46.197 07:48:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.197 07:48:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:46.197 07:48:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.197 07:48:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:46.197 07:48:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:16:46.197 07:48:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.197 07:48:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:46.455 07:48:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.455 07:48:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:16:46.455 07:48:36 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:16:46.455 07:48:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.455 07:48:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:46.455 07:48:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.455 07:48:36 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:46.455 07:48:36 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:16:46.455 07:48:36 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:46.455 07:48:36 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:46.455 07:48:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.455 07:48:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:46.455 07:48:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.455 07:48:36 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:46.455 07:48:36 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:16:46.455 ************************************ 00:16:46.455 END TEST test_create_multi_ublk 00:16:46.455 ************************************ 00:16:46.455 07:48:36 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:46.455 00:16:46.455 real 0m3.443s 00:16:46.455 user 0m0.820s 00:16:46.455 sys 0m0.138s 00:16:46.455 07:48:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:46.455 07:48:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:46.455 07:48:36 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:16:46.455 07:48:36 ublk -- ublk/ublk.sh@147 -- # cleanup 00:16:46.455 07:48:36 ublk -- ublk/ublk.sh@130 -- # killprocess 73538 00:16:46.455 07:48:36 ublk -- common/autotest_common.sh@954 -- # '[' -z 73538 ']' 00:16:46.455 07:48:36 ublk -- common/autotest_common.sh@958 -- # kill -0 73538 00:16:46.455 07:48:36 ublk -- common/autotest_common.sh@959 -- # uname 00:16:46.455 07:48:36 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:46.455 07:48:36 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73538 00:16:46.455 killing process with pid 73538 00:16:46.455 07:48:36 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:46.455 07:48:36 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:46.455 07:48:36 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73538' 00:16:46.455 07:48:36 ublk -- common/autotest_common.sh@973 -- # kill 73538 00:16:46.455 07:48:36 ublk -- common/autotest_common.sh@978 -- # wait 73538 00:16:47.021 [2024-11-29 07:48:36.917037] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:47.022 [2024-11-29 07:48:36.917077] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:47.960 00:16:47.960 real 0m24.987s 00:16:47.960 user 0m34.817s 00:16:47.960 sys 0m9.935s 00:16:47.960 07:48:37 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:47.960 07:48:37 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:47.960 ************************************ 00:16:47.961 END TEST ublk 00:16:47.961 ************************************ 00:16:47.961 07:48:37 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:47.961 07:48:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:16:47.961 07:48:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:47.961 07:48:37 -- common/autotest_common.sh@10 -- # set +x 00:16:47.961 ************************************ 00:16:47.961 START TEST ublk_recovery 00:16:47.961 ************************************ 00:16:47.961 07:48:37 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:47.961 * Looking for test storage... 00:16:47.961 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:47.961 07:48:37 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:47.961 07:48:37 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:47.961 07:48:37 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:16:47.961 07:48:37 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:47.961 07:48:37 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:47.961 07:48:37 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:47.961 07:48:37 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:47.961 07:48:37 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:16:47.961 07:48:37 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:16:47.961 07:48:37 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:16:47.961 07:48:37 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:16:47.961 07:48:37 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:16:47.961 07:48:37 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:16:47.961 07:48:37 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:16:47.961 07:48:37 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:47.961 07:48:37 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:16:47.961 07:48:37 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:16:47.961 07:48:37 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:47.961 07:48:37 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:47.961 07:48:37 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:16:47.961 07:48:37 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:16:47.961 07:48:37 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:47.961 07:48:37 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:16:47.961 07:48:37 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:16:47.961 07:48:37 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:16:47.961 07:48:37 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:16:47.961 07:48:37 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:47.961 07:48:37 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:16:47.961 07:48:37 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:16:47.961 07:48:37 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:47.961 07:48:37 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:47.961 07:48:37 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:16:47.961 07:48:37 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:47.961 07:48:37 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:47.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.961 --rc genhtml_branch_coverage=1 00:16:47.961 --rc genhtml_function_coverage=1 00:16:47.961 --rc genhtml_legend=1 00:16:47.961 --rc geninfo_all_blocks=1 00:16:47.961 --rc geninfo_unexecuted_blocks=1 00:16:47.961 00:16:47.961 ' 00:16:47.961 07:48:37 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:47.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.961 --rc genhtml_branch_coverage=1 00:16:47.961 --rc genhtml_function_coverage=1 00:16:47.961 --rc genhtml_legend=1 00:16:47.961 --rc geninfo_all_blocks=1 00:16:47.961 --rc geninfo_unexecuted_blocks=1 00:16:47.961 00:16:47.961 ' 00:16:47.961 07:48:37 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:47.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.961 --rc genhtml_branch_coverage=1 00:16:47.961 --rc genhtml_function_coverage=1 00:16:47.961 --rc genhtml_legend=1 00:16:47.961 --rc geninfo_all_blocks=1 00:16:47.961 --rc geninfo_unexecuted_blocks=1 00:16:47.961 00:16:47.961 ' 00:16:47.961 07:48:37 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:47.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.961 --rc genhtml_branch_coverage=1 00:16:47.961 --rc genhtml_function_coverage=1 00:16:47.961 --rc genhtml_legend=1 00:16:47.961 --rc geninfo_all_blocks=1 00:16:47.961 --rc geninfo_unexecuted_blocks=1 00:16:47.961 00:16:47.961 ' 00:16:47.961 07:48:37 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:47.961 07:48:37 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:47.961 07:48:37 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:47.961 07:48:37 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:47.961 07:48:37 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:47.961 07:48:37 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:47.961 07:48:37 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:47.961 07:48:37 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:47.961 07:48:37 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:16:47.961 07:48:37 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:16:47.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:47.961 07:48:37 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=73934 00:16:47.961 07:48:37 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:47.961 07:48:37 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:47.961 07:48:37 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 73934 00:16:47.961 07:48:37 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 73934 ']' 00:16:47.961 07:48:37 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:47.961 07:48:37 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:47.961 07:48:37 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:47.961 07:48:37 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:47.961 07:48:37 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.961 [2024-11-29 07:48:37.843850] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:16:47.961 [2024-11-29 07:48:37.843991] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73934 ] 00:16:48.219 [2024-11-29 07:48:38.007436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:48.219 [2024-11-29 07:48:38.096396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:48.219 [2024-11-29 07:48:38.096494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.833 07:48:38 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:48.833 07:48:38 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:16:48.833 07:48:38 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:16:48.833 07:48:38 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.833 07:48:38 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:48.833 [2024-11-29 07:48:38.667465] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:48.833 [2024-11-29 07:48:38.668959] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:48.833 07:48:38 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.833 07:48:38 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:48.833 07:48:38 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.833 07:48:38 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:48.833 malloc0 00:16:48.833 07:48:38 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.833 07:48:38 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:16:48.833 07:48:38 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.833 07:48:38 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:48.833 [2024-11-29 07:48:38.747788] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:16:48.833 [2024-11-29 07:48:38.747875] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:16:48.833 [2024-11-29 07:48:38.747884] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:48.833 [2024-11-29 07:48:38.747891] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:48.833 [2024-11-29 07:48:38.756551] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:48.833 [2024-11-29 07:48:38.756568] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:48.833 [2024-11-29 07:48:38.763469] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:48.833 [2024-11-29 07:48:38.763580] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:49.094 [2024-11-29 07:48:38.778481] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:49.094 1 00:16:49.094 07:48:38 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.094 07:48:38 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:16:50.028 07:48:39 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=73965 00:16:50.028 07:48:39 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:16:50.028 07:48:39 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:16:50.028 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:50.028 fio-3.35 00:16:50.028 Starting 1 process 00:16:55.296 07:48:44 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 73934 00:16:55.296 07:48:44 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:17:00.589 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 73934 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:17:00.589 07:48:49 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=74077 00:17:00.589 07:48:49 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:00.589 07:48:49 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:00.589 07:48:49 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 74077 00:17:00.589 07:48:49 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74077 ']' 00:17:00.589 07:48:49 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:00.589 07:48:49 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:00.589 07:48:49 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:00.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:00.589 07:48:49 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:00.589 07:48:49 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:00.589 [2024-11-29 07:48:49.868705] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:17:00.589 [2024-11-29 07:48:49.868925] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74077 ] 00:17:00.589 [2024-11-29 07:48:50.025051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:00.589 [2024-11-29 07:48:50.131487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.589 [2024-11-29 07:48:50.131514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:00.850 07:48:50 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:00.850 07:48:50 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:17:00.850 07:48:50 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:17:00.850 07:48:50 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.850 07:48:50 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:00.850 [2024-11-29 07:48:50.737465] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:00.850 [2024-11-29 07:48:50.739322] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:00.850 07:48:50 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.850 07:48:50 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:17:00.850 07:48:50 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.850 07:48:50 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:01.112 malloc0 00:17:01.112 07:48:50 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.112 07:48:50 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:17:01.112 07:48:50 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.112 07:48:50 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:01.112 [2024-11-29 07:48:50.841611] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:17:01.112 [2024-11-29 07:48:50.841644] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:01.112 [2024-11-29 07:48:50.841654] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:01.112 [2024-11-29 07:48:50.847472] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:01.112 [2024-11-29 07:48:50.847492] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:17:01.112 1 00:17:01.112 07:48:50 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.112 07:48:50 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 73965 00:17:02.054 [2024-11-29 07:48:51.847517] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:02.054 [2024-11-29 07:48:51.857469] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:02.054 [2024-11-29 07:48:51.857492] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:17:02.990 [2024-11-29 07:48:52.857519] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:02.990 [2024-11-29 07:48:52.865477] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:02.990 [2024-11-29 07:48:52.865495] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:17:03.924 [2024-11-29 07:48:53.865520] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:04.183 [2024-11-29 07:48:53.872470] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:04.183 [2024-11-29 07:48:53.872485] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:17:04.183 [2024-11-29 07:48:53.872493] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:17:04.183 [2024-11-29 07:48:53.872564] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:17:26.111 [2024-11-29 07:49:15.022469] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:17:26.111 [2024-11-29 07:49:15.028125] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:17:26.111 [2024-11-29 07:49:15.034630] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:17:26.111 [2024-11-29 07:49:15.034648] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:17:52.648 00:17:52.648 fio_test: (groupid=0, jobs=1): err= 0: pid=73968: Fri Nov 29 07:49:40 2024 00:17:52.648 read: IOPS=14.6k, BW=57.0MiB/s (59.8MB/s)(3423MiB/60002msec) 00:17:52.648 slat (nsec): min=1066, max=201016, avg=4944.61, stdev=1337.06 00:17:52.648 clat (usec): min=992, max=30252k, avg=4167.18, stdev=248205.83 00:17:52.648 lat (usec): min=1002, max=30252k, avg=4172.13, stdev=248205.83 00:17:52.648 clat percentiles (usec): 00:17:52.648 | 1.00th=[ 1778], 5.00th=[ 1909], 10.00th=[ 1926], 20.00th=[ 1958], 00:17:52.648 | 30.00th=[ 1975], 40.00th=[ 1991], 50.00th=[ 1991], 60.00th=[ 2008], 00:17:52.648 | 70.00th=[ 2024], 80.00th=[ 2040], 90.00th=[ 2089], 95.00th=[ 3097], 00:17:52.648 | 99.00th=[ 5276], 99.50th=[ 5800], 99.90th=[ 7767], 99.95th=[12518], 00:17:52.648 | 99.99th=[13173] 00:17:52.648 bw ( KiB/s): min=56200, max=122960, per=100.00%, avg=116885.56, stdev=13656.05, samples=59 00:17:52.648 iops : min=14050, max=30740, avg=29221.39, stdev=3414.01, samples=59 00:17:52.648 write: IOPS=14.6k, BW=57.0MiB/s (59.7MB/s)(3417MiB/60002msec); 0 zone resets 00:17:52.648 slat (nsec): min=1082, max=121973, avg=4976.89, stdev=1377.50 00:17:52.648 clat (usec): min=1084, max=30252k, avg=4594.24, stdev=268630.44 00:17:52.648 lat (usec): min=1093, max=30252k, avg=4599.21, stdev=268630.44 00:17:52.648 clat percentiles (usec): 00:17:52.648 | 1.00th=[ 1827], 5.00th=[ 1991], 10.00th=[ 2024], 20.00th=[ 2040], 00:17:52.648 | 30.00th=[ 2057], 40.00th=[ 2073], 50.00th=[ 2089], 60.00th=[ 2114], 00:17:52.648 | 70.00th=[ 2114], 80.00th=[ 2147], 90.00th=[ 2180], 95.00th=[ 2999], 00:17:52.648 | 99.00th=[ 5342], 99.50th=[ 5866], 99.90th=[ 7898], 99.95th=[12518], 00:17:52.648 | 99.99th=[13304] 00:17:52.648 bw ( KiB/s): min=56072, max=122928, per=100.00%, avg=116729.63, stdev=13598.63, samples=59 00:17:52.648 iops : min=14018, max=30732, avg=29182.41, stdev=3399.65, samples=59 00:17:52.648 lat (usec) : 1000=0.01% 00:17:52.648 lat (msec) : 2=29.42%, 4=67.64%, 10=2.87%, 20=0.05%, >=2000=0.01% 00:17:52.648 cpu : usr=3.30%, sys=14.89%, ctx=57678, majf=0, minf=13 00:17:52.648 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:17:52.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:52.648 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:52.648 
issued rwts: total=876221,874828,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:52.648 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:52.648 00:17:52.648 Run status group 0 (all jobs): 00:17:52.648 READ: bw=57.0MiB/s (59.8MB/s), 57.0MiB/s-57.0MiB/s (59.8MB/s-59.8MB/s), io=3423MiB (3589MB), run=60002-60002msec 00:17:52.648 WRITE: bw=57.0MiB/s (59.7MB/s), 57.0MiB/s-57.0MiB/s (59.7MB/s-59.7MB/s), io=3417MiB (3583MB), run=60002-60002msec 00:17:52.648 00:17:52.648 Disk stats (read/write): 00:17:52.648 ublkb1: ios=872835/871540, merge=0/0, ticks=3599157/3895267, in_queue=7494424, util=99.89% 00:17:52.648 07:49:40 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:17:52.648 07:49:40 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.648 07:49:40 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:52.648 [2024-11-29 07:49:40.044647] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:52.648 [2024-11-29 07:49:40.075581] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:52.648 [2024-11-29 07:49:40.075725] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:52.648 [2024-11-29 07:49:40.083486] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:52.648 [2024-11-29 07:49:40.083578] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:52.648 [2024-11-29 07:49:40.083584] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:52.648 07:49:40 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.648 07:49:40 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:17:52.648 07:49:40 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.648 07:49:40 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:52.648 [2024-11-29 07:49:40.098550] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:52.648 [2024-11-29 07:49:40.102918] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:52.648 [2024-11-29 07:49:40.102951] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:52.648 07:49:40 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.648 07:49:40 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:17:52.648 07:49:40 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:17:52.648 07:49:40 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 74077 00:17:52.648 07:49:40 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 74077 ']' 00:17:52.648 07:49:40 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 74077 00:17:52.648 07:49:40 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:17:52.648 07:49:40 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:52.648 07:49:40 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74077 00:17:52.648 killing process with pid 74077 00:17:52.648 07:49:40 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:52.648 07:49:40 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:52.648 07:49:40 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74077' 00:17:52.648 07:49:40 ublk_recovery -- common/autotest_common.sh@973 -- # kill 74077 00:17:52.648 07:49:40 ublk_recovery -- common/autotest_common.sh@978 -- # wait 74077 
00:17:52.648 [2024-11-29 07:49:41.160908] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:52.648 [2024-11-29 07:49:41.160966] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:52.648 ************************************ 00:17:52.648 END TEST ublk_recovery 00:17:52.648 ************************************ 00:17:52.648 00:17:52.648 real 1m4.254s 00:17:52.648 user 1m46.686s 00:17:52.648 sys 0m22.035s 00:17:52.648 07:49:41 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:52.648 07:49:41 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:52.648 07:49:41 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:17:52.648 07:49:41 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:17:52.648 07:49:41 -- spdk/autotest.sh@260 -- # timing_exit lib 00:17:52.648 07:49:41 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:52.648 07:49:41 -- common/autotest_common.sh@10 -- # set +x 00:17:52.648 07:49:41 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:17:52.648 07:49:41 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:17:52.648 07:49:41 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:17:52.648 07:49:41 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:17:52.648 07:49:41 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:17:52.648 07:49:41 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:17:52.648 07:49:41 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:17:52.648 07:49:41 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:17:52.648 07:49:41 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:17:52.648 07:49:41 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:17:52.648 07:49:41 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:52.648 07:49:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:52.648 07:49:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:52.648 07:49:41 -- common/autotest_common.sh@10 -- # set +x 00:17:52.648 ************************************ 00:17:52.648 START TEST ftl 00:17:52.648 ************************************ 00:17:52.648 07:49:41 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:52.648 * Looking for test storage... 
00:17:52.648 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:52.648 07:49:42 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:52.648 07:49:42 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:52.648 07:49:42 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:17:52.648 07:49:42 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:52.648 07:49:42 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:52.648 07:49:42 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:52.648 07:49:42 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:52.648 07:49:42 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:17:52.648 07:49:42 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:17:52.648 07:49:42 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:17:52.648 07:49:42 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:17:52.648 07:49:42 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:17:52.648 07:49:42 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:17:52.648 07:49:42 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:17:52.648 07:49:42 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:52.648 07:49:42 ftl -- scripts/common.sh@344 -- # case "$op" in 00:17:52.648 07:49:42 ftl -- scripts/common.sh@345 -- # : 1 00:17:52.648 07:49:42 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:52.648 07:49:42 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:52.648 07:49:42 ftl -- scripts/common.sh@365 -- # decimal 1 00:17:52.648 07:49:42 ftl -- scripts/common.sh@353 -- # local d=1 00:17:52.648 07:49:42 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:52.648 07:49:42 ftl -- scripts/common.sh@355 -- # echo 1 00:17:52.648 07:49:42 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:17:52.648 07:49:42 ftl -- scripts/common.sh@366 -- # decimal 2 00:17:52.648 07:49:42 ftl -- scripts/common.sh@353 -- # local d=2 00:17:52.648 07:49:42 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:52.648 07:49:42 ftl -- scripts/common.sh@355 -- # echo 2 00:17:52.648 07:49:42 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:17:52.648 07:49:42 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:52.648 07:49:42 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:52.648 07:49:42 ftl -- scripts/common.sh@368 -- # return 0 00:17:52.648 07:49:42 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:52.648 07:49:42 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:52.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.648 --rc genhtml_branch_coverage=1 00:17:52.648 --rc genhtml_function_coverage=1 00:17:52.648 --rc genhtml_legend=1 00:17:52.648 --rc geninfo_all_blocks=1 00:17:52.648 --rc geninfo_unexecuted_blocks=1 00:17:52.648 00:17:52.648 ' 00:17:52.648 07:49:42 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:52.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.648 --rc genhtml_branch_coverage=1 00:17:52.648 --rc genhtml_function_coverage=1 00:17:52.648 --rc genhtml_legend=1 00:17:52.648 --rc geninfo_all_blocks=1 00:17:52.648 --rc geninfo_unexecuted_blocks=1 00:17:52.648 00:17:52.648 ' 00:17:52.648 07:49:42 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:52.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.648 --rc genhtml_branch_coverage=1 00:17:52.648 --rc genhtml_function_coverage=1 00:17:52.648 --rc 
genhtml_legend=1 00:17:52.648 --rc geninfo_all_blocks=1 00:17:52.648 --rc geninfo_unexecuted_blocks=1 00:17:52.648 00:17:52.648 ' 00:17:52.648 07:49:42 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:52.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.648 --rc genhtml_branch_coverage=1 00:17:52.648 --rc genhtml_function_coverage=1 00:17:52.648 --rc genhtml_legend=1 00:17:52.648 --rc geninfo_all_blocks=1 00:17:52.648 --rc geninfo_unexecuted_blocks=1 00:17:52.649 00:17:52.649 ' 00:17:52.649 07:49:42 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:52.649 07:49:42 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:52.649 07:49:42 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:52.649 07:49:42 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:52.649 07:49:42 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:52.649 07:49:42 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:52.649 07:49:42 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:52.649 07:49:42 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:52.649 07:49:42 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:52.649 07:49:42 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:52.649 07:49:42 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:52.649 07:49:42 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:52.649 07:49:42 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:52.649 07:49:42 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:52.649 07:49:42 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:52.649 07:49:42 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:52.649 07:49:42 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:52.649 07:49:42 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:52.649 07:49:42 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:52.649 07:49:42 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:52.649 07:49:42 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:52.649 07:49:42 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:52.649 07:49:42 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:52.649 07:49:42 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:52.649 07:49:42 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:52.649 07:49:42 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:52.649 07:49:42 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:52.649 07:49:42 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:52.649 07:49:42 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:52.649 07:49:42 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:52.649 07:49:42 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:17:52.649 07:49:42 ftl -- ftl/ftl.sh@34 -- # 
PCI_ALLOWED= 00:17:52.649 07:49:42 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:17:52.649 07:49:42 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:17:52.649 07:49:42 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:52.649 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:52.649 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:52.649 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:52.649 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:52.649 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:52.909 07:49:42 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=74882 00:17:52.909 07:49:42 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:17:52.909 07:49:42 ftl -- ftl/ftl.sh@38 -- # waitforlisten 74882 00:17:52.909 07:49:42 ftl -- common/autotest_common.sh@835 -- # '[' -z 74882 ']' 00:17:52.909 07:49:42 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:52.909 07:49:42 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:52.909 07:49:42 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:52.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:52.909 07:49:42 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:52.909 07:49:42 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:52.909 [2024-11-29 07:49:42.669366] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:17:52.909 [2024-11-29 07:49:42.669709] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74882 ] 00:17:52.909 [2024-11-29 07:49:42.834054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.171 [2024-11-29 07:49:42.930170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.742 07:49:43 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:53.742 07:49:43 ftl -- common/autotest_common.sh@868 -- # return 0 00:17:53.742 07:49:43 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:17:54.003 07:49:43 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:17:54.948 07:49:44 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:54.948 07:49:44 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:17:55.211 07:49:45 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:17:55.211 07:49:45 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:55.211 07:49:45 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:55.471 07:49:45 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:17:55.471 07:49:45 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:17:55.471 07:49:45 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:17:55.471 07:49:45 ftl -- ftl/ftl.sh@50 -- # break 00:17:55.471 07:49:45 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:17:55.471 07:49:45 ftl -- 
ftl/ftl.sh@59 -- # base_size=1310720 00:17:55.471 07:49:45 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:55.471 07:49:45 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:55.731 07:49:45 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:17:55.731 07:49:45 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:17:55.731 07:49:45 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:17:55.731 07:49:45 ftl -- ftl/ftl.sh@63 -- # break 00:17:55.732 07:49:45 ftl -- ftl/ftl.sh@66 -- # killprocess 74882 00:17:55.732 07:49:45 ftl -- common/autotest_common.sh@954 -- # '[' -z 74882 ']' 00:17:55.732 07:49:45 ftl -- common/autotest_common.sh@958 -- # kill -0 74882 00:17:55.732 07:49:45 ftl -- common/autotest_common.sh@959 -- # uname 00:17:55.732 07:49:45 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:55.732 07:49:45 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74882 00:17:55.732 killing process with pid 74882 00:17:55.732 07:49:45 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:55.732 07:49:45 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:55.732 07:49:45 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74882' 00:17:55.732 07:49:45 ftl -- common/autotest_common.sh@973 -- # kill 74882 00:17:55.732 07:49:45 ftl -- common/autotest_common.sh@978 -- # wait 74882 00:17:57.112 07:49:46 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:17:57.112 07:49:46 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:57.112 07:49:46 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:57.112 07:49:46 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:57.112 07:49:46 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:57.112 ************************************ 00:17:57.112 START TEST ftl_fio_basic 00:17:57.112 ************************************ 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:57.112 * Looking for test storage... 
00:17:57.112 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:57.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.112 --rc genhtml_branch_coverage=1 00:17:57.112 --rc genhtml_function_coverage=1 00:17:57.112 --rc genhtml_legend=1 00:17:57.112 --rc geninfo_all_blocks=1 00:17:57.112 --rc geninfo_unexecuted_blocks=1 00:17:57.112 00:17:57.112 ' 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:57.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.112 --rc 
genhtml_branch_coverage=1 00:17:57.112 --rc genhtml_function_coverage=1 00:17:57.112 --rc genhtml_legend=1 00:17:57.112 --rc geninfo_all_blocks=1 00:17:57.112 --rc geninfo_unexecuted_blocks=1 00:17:57.112 00:17:57.112 ' 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:57.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.112 --rc genhtml_branch_coverage=1 00:17:57.112 --rc genhtml_function_coverage=1 00:17:57.112 --rc genhtml_legend=1 00:17:57.112 --rc geninfo_all_blocks=1 00:17:57.112 --rc geninfo_unexecuted_blocks=1 00:17:57.112 00:17:57.112 ' 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:57.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.112 --rc genhtml_branch_coverage=1 00:17:57.112 --rc genhtml_function_coverage=1 00:17:57.112 --rc genhtml_legend=1 00:17:57.112 --rc geninfo_all_blocks=1 00:17:57.112 --rc geninfo_unexecuted_blocks=1 00:17:57.112 00:17:57.112 ' 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:57.112 
07:49:46 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=75014 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 75014 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 75014 ']' 00:17:57.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
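The cmp_versions walk at the start of this trace (lt 1.15 2, used to decide which lcov coverage flags to export) is a plain field-by-field numeric compare after splitting on dots, dashes and colons. A condensed sketch of that logic follows; version_lt is a hypothetical name standing in for the lt/cmp_versions helpers in scripts/common.sh:

    # succeed (exit 0) only when version $1 sorts strictly before $2
    version_lt() {
        local IFS='.-:' v a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for ((v = 0; v < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); v++)); do
            local x=${a[v]:-0} y=${b[v]:-0}
            ((x < y)) && return 0
            ((x > y)) && return 1
        done
        return 1    # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "1.15 sorts before 2"    # matches the branch taken in the trace above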
00:17:57.112 07:49:46 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:57.112 07:49:46 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:57.112 [2024-11-29 07:49:46.950808] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:17:57.113 [2024-11-29 07:49:46.950924] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75014 ] 00:17:57.372 [2024-11-29 07:49:47.107341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:57.372 [2024-11-29 07:49:47.184785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:57.372 [2024-11-29 07:49:47.185102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:57.372 [2024-11-29 07:49:47.185130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:57.940 07:49:47 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:57.940 07:49:47 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:17:57.940 07:49:47 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:57.940 07:49:47 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:17:57.940 07:49:47 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:57.940 07:49:47 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:17:57.940 07:49:47 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:17:57.940 07:49:47 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:58.199 07:49:48 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:58.199 07:49:48 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:17:58.199 07:49:48 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:58.199 07:49:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:17:58.199 07:49:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:58.199 07:49:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:58.199 07:49:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:58.199 07:49:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:58.458 07:49:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:58.458 { 00:17:58.458 "name": "nvme0n1", 00:17:58.458 "aliases": [ 00:17:58.458 "4df8a9a3-bf54-4024-938b-37d04e575f69" 00:17:58.458 ], 00:17:58.458 "product_name": "NVMe disk", 00:17:58.458 "block_size": 4096, 00:17:58.458 "num_blocks": 1310720, 00:17:58.458 "uuid": "4df8a9a3-bf54-4024-938b-37d04e575f69", 00:17:58.458 "numa_id": -1, 00:17:58.458 "assigned_rate_limits": { 00:17:58.458 "rw_ios_per_sec": 0, 00:17:58.458 "rw_mbytes_per_sec": 0, 00:17:58.458 "r_mbytes_per_sec": 0, 00:17:58.458 "w_mbytes_per_sec": 0 00:17:58.458 }, 00:17:58.458 "claimed": false, 00:17:58.458 "zoned": false, 00:17:58.458 "supported_io_types": { 00:17:58.458 "read": true, 00:17:58.458 "write": true, 00:17:58.458 "unmap": true, 00:17:58.458 "flush": true, 
00:17:58.458 "reset": true, 00:17:58.458 "nvme_admin": true, 00:17:58.458 "nvme_io": true, 00:17:58.458 "nvme_io_md": false, 00:17:58.458 "write_zeroes": true, 00:17:58.458 "zcopy": false, 00:17:58.458 "get_zone_info": false, 00:17:58.458 "zone_management": false, 00:17:58.458 "zone_append": false, 00:17:58.458 "compare": true, 00:17:58.458 "compare_and_write": false, 00:17:58.458 "abort": true, 00:17:58.458 "seek_hole": false, 00:17:58.458 "seek_data": false, 00:17:58.458 "copy": true, 00:17:58.458 "nvme_iov_md": false 00:17:58.458 }, 00:17:58.458 "driver_specific": { 00:17:58.458 "nvme": [ 00:17:58.458 { 00:17:58.458 "pci_address": "0000:00:11.0", 00:17:58.458 "trid": { 00:17:58.458 "trtype": "PCIe", 00:17:58.458 "traddr": "0000:00:11.0" 00:17:58.458 }, 00:17:58.458 "ctrlr_data": { 00:17:58.458 "cntlid": 0, 00:17:58.458 "vendor_id": "0x1b36", 00:17:58.458 "model_number": "QEMU NVMe Ctrl", 00:17:58.458 "serial_number": "12341", 00:17:58.458 "firmware_revision": "8.0.0", 00:17:58.458 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:58.458 "oacs": { 00:17:58.458 "security": 0, 00:17:58.458 "format": 1, 00:17:58.458 "firmware": 0, 00:17:58.458 "ns_manage": 1 00:17:58.458 }, 00:17:58.458 "multi_ctrlr": false, 00:17:58.458 "ana_reporting": false 00:17:58.458 }, 00:17:58.458 "vs": { 00:17:58.458 "nvme_version": "1.4" 00:17:58.458 }, 00:17:58.458 "ns_data": { 00:17:58.458 "id": 1, 00:17:58.458 "can_share": false 00:17:58.458 } 00:17:58.458 } 00:17:58.458 ], 00:17:58.458 "mp_policy": "active_passive" 00:17:58.458 } 00:17:58.458 } 00:17:58.458 ]' 00:17:58.458 07:49:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:58.458 07:49:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:58.458 07:49:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:58.458 07:49:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:17:58.458 07:49:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:17:58.458 07:49:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:17:58.458 07:49:48 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:17:58.458 07:49:48 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:58.458 07:49:48 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:17:58.458 07:49:48 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:58.458 07:49:48 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:58.717 07:49:48 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:17:58.717 07:49:48 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:58.977 07:49:48 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=711b3575-e2e8-44e7-bcaa-14176a34a32c 00:17:58.977 07:49:48 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 711b3575-e2e8-44e7-bcaa-14176a34a32c 00:17:58.977 07:49:48 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=1c3c5f8f-6e1f-409b-b88d-733e70c15d8c 00:17:58.977 07:49:48 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 1c3c5f8f-6e1f-409b-b88d-733e70c15d8c 00:17:58.977 07:49:48 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:17:58.977 07:49:48 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:58.977 07:49:48 
ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=1c3c5f8f-6e1f-409b-b88d-733e70c15d8c 00:17:58.977 07:49:48 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:17:58.977 07:49:48 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 1c3c5f8f-6e1f-409b-b88d-733e70c15d8c 00:17:58.977 07:49:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=1c3c5f8f-6e1f-409b-b88d-733e70c15d8c 00:17:58.977 07:49:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:58.977 07:49:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:58.977 07:49:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:58.977 07:49:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1c3c5f8f-6e1f-409b-b88d-733e70c15d8c 00:17:59.330 07:49:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:59.330 { 00:17:59.330 "name": "1c3c5f8f-6e1f-409b-b88d-733e70c15d8c", 00:17:59.330 "aliases": [ 00:17:59.330 "lvs/nvme0n1p0" 00:17:59.330 ], 00:17:59.330 "product_name": "Logical Volume", 00:17:59.330 "block_size": 4096, 00:17:59.330 "num_blocks": 26476544, 00:17:59.330 "uuid": "1c3c5f8f-6e1f-409b-b88d-733e70c15d8c", 00:17:59.330 "assigned_rate_limits": { 00:17:59.330 "rw_ios_per_sec": 0, 00:17:59.330 "rw_mbytes_per_sec": 0, 00:17:59.330 "r_mbytes_per_sec": 0, 00:17:59.330 "w_mbytes_per_sec": 0 00:17:59.330 }, 00:17:59.330 "claimed": false, 00:17:59.330 "zoned": false, 00:17:59.330 "supported_io_types": { 00:17:59.330 "read": true, 00:17:59.330 "write": true, 00:17:59.330 "unmap": true, 00:17:59.330 "flush": false, 00:17:59.330 "reset": true, 00:17:59.330 "nvme_admin": false, 00:17:59.330 "nvme_io": false, 00:17:59.330 "nvme_io_md": false, 00:17:59.330 "write_zeroes": true, 00:17:59.330 "zcopy": false, 00:17:59.330 "get_zone_info": false, 00:17:59.330 "zone_management": false, 00:17:59.330 "zone_append": false, 00:17:59.330 "compare": false, 00:17:59.330 "compare_and_write": false, 00:17:59.330 "abort": false, 00:17:59.330 "seek_hole": true, 00:17:59.330 "seek_data": true, 00:17:59.330 "copy": false, 00:17:59.330 "nvme_iov_md": false 00:17:59.330 }, 00:17:59.330 "driver_specific": { 00:17:59.330 "lvol": { 00:17:59.330 "lvol_store_uuid": "711b3575-e2e8-44e7-bcaa-14176a34a32c", 00:17:59.330 "base_bdev": "nvme0n1", 00:17:59.330 "thin_provision": true, 00:17:59.330 "num_allocated_clusters": 0, 00:17:59.330 "snapshot": false, 00:17:59.330 "clone": false, 00:17:59.330 "esnap_clone": false 00:17:59.330 } 00:17:59.330 } 00:17:59.330 } 00:17:59.330 ]' 00:17:59.330 07:49:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:59.330 07:49:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:59.330 07:49:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:59.330 07:49:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:59.330 07:49:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:59.330 07:49:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:59.330 07:49:49 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:17:59.330 07:49:49 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:17:59.330 07:49:49 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 
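The sizes echoed around this point in the trace, 5120 MiB for nvme0n1 and 103424 MiB for the thin lvol, are simply block_size times num_blocks from bdev_get_bdevs, converted to MiB. A minimal restatement using the same rpc.py path and jq filters the trace shows:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    info=$("$rpc" bdev_get_bdevs -b nvme0n1)
    bs=$(jq '.[] .block_size' <<< "$info")     # 4096
    nb=$(jq '.[] .num_blocks' <<< "$info")     # 1310720
    echo $(( bs * nb / 1024 / 1024 ))          # 5120 MiB
    # the lvol case is the same arithmetic: 4096 * 26476544 / 1024 / 1024 = 103424 MiB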
00:17:59.604 07:49:49 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:59.604 07:49:49 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:59.604 07:49:49 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 1c3c5f8f-6e1f-409b-b88d-733e70c15d8c 00:17:59.604 07:49:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=1c3c5f8f-6e1f-409b-b88d-733e70c15d8c 00:17:59.604 07:49:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:59.604 07:49:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:59.604 07:49:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:59.604 07:49:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1c3c5f8f-6e1f-409b-b88d-733e70c15d8c 00:17:59.863 07:49:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:59.863 { 00:17:59.863 "name": "1c3c5f8f-6e1f-409b-b88d-733e70c15d8c", 00:17:59.863 "aliases": [ 00:17:59.863 "lvs/nvme0n1p0" 00:17:59.863 ], 00:17:59.863 "product_name": "Logical Volume", 00:17:59.863 "block_size": 4096, 00:17:59.863 "num_blocks": 26476544, 00:17:59.863 "uuid": "1c3c5f8f-6e1f-409b-b88d-733e70c15d8c", 00:17:59.863 "assigned_rate_limits": { 00:17:59.863 "rw_ios_per_sec": 0, 00:17:59.863 "rw_mbytes_per_sec": 0, 00:17:59.863 "r_mbytes_per_sec": 0, 00:17:59.863 "w_mbytes_per_sec": 0 00:17:59.863 }, 00:17:59.863 "claimed": false, 00:17:59.863 "zoned": false, 00:17:59.863 "supported_io_types": { 00:17:59.863 "read": true, 00:17:59.863 "write": true, 00:17:59.863 "unmap": true, 00:17:59.863 "flush": false, 00:17:59.863 "reset": true, 00:17:59.863 "nvme_admin": false, 00:17:59.863 "nvme_io": false, 00:17:59.863 "nvme_io_md": false, 00:17:59.863 "write_zeroes": true, 00:17:59.863 "zcopy": false, 00:17:59.863 "get_zone_info": false, 00:17:59.863 "zone_management": false, 00:17:59.863 "zone_append": false, 00:17:59.863 "compare": false, 00:17:59.863 "compare_and_write": false, 00:17:59.863 "abort": false, 00:17:59.863 "seek_hole": true, 00:17:59.863 "seek_data": true, 00:17:59.863 "copy": false, 00:17:59.863 "nvme_iov_md": false 00:17:59.863 }, 00:17:59.863 "driver_specific": { 00:17:59.863 "lvol": { 00:17:59.863 "lvol_store_uuid": "711b3575-e2e8-44e7-bcaa-14176a34a32c", 00:17:59.863 "base_bdev": "nvme0n1", 00:17:59.863 "thin_provision": true, 00:17:59.863 "num_allocated_clusters": 0, 00:17:59.863 "snapshot": false, 00:17:59.863 "clone": false, 00:17:59.863 "esnap_clone": false 00:17:59.863 } 00:17:59.863 } 00:17:59.863 } 00:17:59.863 ]' 00:17:59.863 07:49:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:59.863 07:49:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:59.863 07:49:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:59.863 07:49:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:59.863 07:49:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:59.863 07:49:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:59.863 07:49:49 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:17:59.863 07:49:49 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:00.121 07:49:49 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:18:00.121 07:49:49 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- 
# l2p_percentage=60 00:18:00.121 07:49:49 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:18:00.121 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:18:00.121 07:49:49 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 1c3c5f8f-6e1f-409b-b88d-733e70c15d8c 00:18:00.121 07:49:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=1c3c5f8f-6e1f-409b-b88d-733e70c15d8c 00:18:00.121 07:49:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:00.121 07:49:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:00.121 07:49:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:00.121 07:49:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1c3c5f8f-6e1f-409b-b88d-733e70c15d8c 00:18:00.378 07:49:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:00.378 { 00:18:00.378 "name": "1c3c5f8f-6e1f-409b-b88d-733e70c15d8c", 00:18:00.378 "aliases": [ 00:18:00.378 "lvs/nvme0n1p0" 00:18:00.378 ], 00:18:00.378 "product_name": "Logical Volume", 00:18:00.378 "block_size": 4096, 00:18:00.378 "num_blocks": 26476544, 00:18:00.378 "uuid": "1c3c5f8f-6e1f-409b-b88d-733e70c15d8c", 00:18:00.378 "assigned_rate_limits": { 00:18:00.378 "rw_ios_per_sec": 0, 00:18:00.378 "rw_mbytes_per_sec": 0, 00:18:00.378 "r_mbytes_per_sec": 0, 00:18:00.378 "w_mbytes_per_sec": 0 00:18:00.378 }, 00:18:00.378 "claimed": false, 00:18:00.378 "zoned": false, 00:18:00.378 "supported_io_types": { 00:18:00.378 "read": true, 00:18:00.378 "write": true, 00:18:00.378 "unmap": true, 00:18:00.378 "flush": false, 00:18:00.378 "reset": true, 00:18:00.378 "nvme_admin": false, 00:18:00.378 "nvme_io": false, 00:18:00.379 "nvme_io_md": false, 00:18:00.379 "write_zeroes": true, 00:18:00.379 "zcopy": false, 00:18:00.379 "get_zone_info": false, 00:18:00.379 "zone_management": false, 00:18:00.379 "zone_append": false, 00:18:00.379 "compare": false, 00:18:00.379 "compare_and_write": false, 00:18:00.379 "abort": false, 00:18:00.379 "seek_hole": true, 00:18:00.379 "seek_data": true, 00:18:00.379 "copy": false, 00:18:00.379 "nvme_iov_md": false 00:18:00.379 }, 00:18:00.379 "driver_specific": { 00:18:00.379 "lvol": { 00:18:00.379 "lvol_store_uuid": "711b3575-e2e8-44e7-bcaa-14176a34a32c", 00:18:00.379 "base_bdev": "nvme0n1", 00:18:00.379 "thin_provision": true, 00:18:00.379 "num_allocated_clusters": 0, 00:18:00.379 "snapshot": false, 00:18:00.379 "clone": false, 00:18:00.379 "esnap_clone": false 00:18:00.379 } 00:18:00.379 } 00:18:00.379 } 00:18:00.379 ]' 00:18:00.379 07:49:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:00.379 07:49:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:00.379 07:49:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:00.379 07:49:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:00.379 07:49:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:00.379 07:49:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:00.379 07:49:50 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:18:00.379 07:49:50 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:18:00.379 07:49:50 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 
1c3c5f8f-6e1f-409b-b88d-733e70c15d8c -c nvc0n1p0 --l2p_dram_limit 60 00:18:00.647 [2024-11-29 07:49:50.338306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.647 [2024-11-29 07:49:50.338345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:00.647 [2024-11-29 07:49:50.338357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:00.647 [2024-11-29 07:49:50.338363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.647 [2024-11-29 07:49:50.338424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.647 [2024-11-29 07:49:50.338432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:00.647 [2024-11-29 07:49:50.338441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:18:00.647 [2024-11-29 07:49:50.338456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.647 [2024-11-29 07:49:50.338486] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:00.647 [2024-11-29 07:49:50.339082] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:00.647 [2024-11-29 07:49:50.339103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.647 [2024-11-29 07:49:50.339110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:00.647 [2024-11-29 07:49:50.339118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.621 ms 00:18:00.647 [2024-11-29 07:49:50.339124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.647 [2024-11-29 07:49:50.339184] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 8297c66a-1194-4613-aec7-e9348cfa1ab6 00:18:00.647 [2024-11-29 07:49:50.340198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.647 [2024-11-29 07:49:50.340226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:00.647 [2024-11-29 07:49:50.340233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:18:00.647 [2024-11-29 07:49:50.340240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.647 [2024-11-29 07:49:50.345388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.647 [2024-11-29 07:49:50.345417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:00.647 [2024-11-29 07:49:50.345425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.092 ms 00:18:00.647 [2024-11-29 07:49:50.345436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.647 [2024-11-29 07:49:50.345523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.647 [2024-11-29 07:49:50.345532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:00.647 [2024-11-29 07:49:50.345539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:18:00.647 [2024-11-29 07:49:50.345548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.647 [2024-11-29 07:49:50.345596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.647 [2024-11-29 07:49:50.345605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:00.647 [2024-11-29 07:49:50.345611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.008 ms 00:18:00.647 [2024-11-29 07:49:50.345619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.647 [2024-11-29 07:49:50.345643] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:00.647 [2024-11-29 07:49:50.348504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.647 [2024-11-29 07:49:50.348528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:00.647 [2024-11-29 07:49:50.348540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.863 ms 00:18:00.647 [2024-11-29 07:49:50.348545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.647 [2024-11-29 07:49:50.348582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.647 [2024-11-29 07:49:50.348589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:00.647 [2024-11-29 07:49:50.348596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:00.647 [2024-11-29 07:49:50.348602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.647 [2024-11-29 07:49:50.348623] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:00.647 [2024-11-29 07:49:50.348735] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:00.647 [2024-11-29 07:49:50.348753] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:00.647 [2024-11-29 07:49:50.348762] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:00.647 [2024-11-29 07:49:50.348770] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:00.647 [2024-11-29 07:49:50.348777] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:00.647 [2024-11-29 07:49:50.348785] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:00.647 [2024-11-29 07:49:50.348791] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:00.647 [2024-11-29 07:49:50.348798] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:00.647 [2024-11-29 07:49:50.348803] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:00.647 [2024-11-29 07:49:50.348812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.647 [2024-11-29 07:49:50.348817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:00.648 [2024-11-29 07:49:50.348825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms 00:18:00.648 [2024-11-29 07:49:50.348831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.648 [2024-11-29 07:49:50.348904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.648 [2024-11-29 07:49:50.348910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:00.648 [2024-11-29 07:49:50.348918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:18:00.648 [2024-11-29 07:49:50.348929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.648 [2024-11-29 07:49:50.349023] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 
00:18:00.648 [2024-11-29 07:49:50.349032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:00.648 [2024-11-29 07:49:50.349040] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:00.648 [2024-11-29 07:49:50.349045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:00.648 [2024-11-29 07:49:50.349052] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:00.648 [2024-11-29 07:49:50.349057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:00.648 [2024-11-29 07:49:50.349064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:00.648 [2024-11-29 07:49:50.349069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:00.648 [2024-11-29 07:49:50.349075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:00.648 [2024-11-29 07:49:50.349080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:00.648 [2024-11-29 07:49:50.349086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:00.648 [2024-11-29 07:49:50.349092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:00.648 [2024-11-29 07:49:50.349098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:00.648 [2024-11-29 07:49:50.349104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:00.648 [2024-11-29 07:49:50.349110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:00.648 [2024-11-29 07:49:50.349118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:00.648 [2024-11-29 07:49:50.349127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:00.648 [2024-11-29 07:49:50.349132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:00.648 [2024-11-29 07:49:50.349138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:00.648 [2024-11-29 07:49:50.349144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:00.648 [2024-11-29 07:49:50.349150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:00.648 [2024-11-29 07:49:50.349155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:00.648 [2024-11-29 07:49:50.349161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:00.648 [2024-11-29 07:49:50.349166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:00.648 [2024-11-29 07:49:50.349173] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:00.648 [2024-11-29 07:49:50.349178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:00.648 [2024-11-29 07:49:50.349184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:00.648 [2024-11-29 07:49:50.349189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:00.648 [2024-11-29 07:49:50.349195] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:00.648 [2024-11-29 07:49:50.349200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:00.648 [2024-11-29 07:49:50.349206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:00.648 [2024-11-29 07:49:50.349211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:00.648 [2024-11-29 07:49:50.349219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.12 MiB 00:18:00.648 [2024-11-29 07:49:50.349233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:00.648 [2024-11-29 07:49:50.349239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:00.648 [2024-11-29 07:49:50.349244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:00.648 [2024-11-29 07:49:50.349250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:00.648 [2024-11-29 07:49:50.349255] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:00.648 [2024-11-29 07:49:50.349262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:00.648 [2024-11-29 07:49:50.349266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:00.648 [2024-11-29 07:49:50.349272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:00.648 [2024-11-29 07:49:50.349277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:00.648 [2024-11-29 07:49:50.349284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:00.648 [2024-11-29 07:49:50.349289] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:00.648 [2024-11-29 07:49:50.349296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:00.648 [2024-11-29 07:49:50.349301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:00.648 [2024-11-29 07:49:50.349308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:00.648 [2024-11-29 07:49:50.349315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:00.648 [2024-11-29 07:49:50.349323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:00.648 [2024-11-29 07:49:50.349328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:00.648 [2024-11-29 07:49:50.349334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:00.648 [2024-11-29 07:49:50.349340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:00.648 [2024-11-29 07:49:50.349347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:00.648 [2024-11-29 07:49:50.349354] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:00.648 [2024-11-29 07:49:50.349362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:00.648 [2024-11-29 07:49:50.349369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:00.648 [2024-11-29 07:49:50.349376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:00.648 [2024-11-29 07:49:50.349381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:00.648 [2024-11-29 07:49:50.349388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:00.648 [2024-11-29 07:49:50.349393] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:00.648 [2024-11-29 07:49:50.349400] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:00.648 [2024-11-29 07:49:50.349405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:00.648 [2024-11-29 07:49:50.349412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:00.648 [2024-11-29 07:49:50.349417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:00.648 [2024-11-29 07:49:50.349424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:00.648 [2024-11-29 07:49:50.349429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:00.648 [2024-11-29 07:49:50.349437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:00.648 [2024-11-29 07:49:50.349454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:00.648 [2024-11-29 07:49:50.349461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:00.648 [2024-11-29 07:49:50.349467] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:00.648 [2024-11-29 07:49:50.349475] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:00.648 [2024-11-29 07:49:50.349481] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:00.648 [2024-11-29 07:49:50.349488] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:00.648 [2024-11-29 07:49:50.349494] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:00.648 [2024-11-29 07:49:50.349500] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:00.648 [2024-11-29 07:49:50.349506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.648 [2024-11-29 07:49:50.349513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:00.648 [2024-11-29 07:49:50.349519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 00:18:00.648 [2024-11-29 07:49:50.349526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.648 [2024-11-29 07:49:50.349595] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
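Stripped of the xtrace prefixes, the bdev stack whose FTL startup is being logged here was assembled with the RPC sequence below. Every command and argument is taken verbatim from the trace; the two UUIDs are the ones this particular run generated and would differ on another run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # base device: a thin-provisioned lvol carved from the 0000:00:11.0 namespace
    "$rpc" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    "$rpc" bdev_lvol_create_lvstore nvme0n1 lvs
    "$rpc" bdev_lvol_create nvme0n1p0 103424 -t -u 711b3575-e2e8-44e7-bcaa-14176a34a32c
    # write-buffer cache: the first 5171 MiB split of the 0000:00:10.0 namespace
    "$rpc" bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    "$rpc" bdev_split_create nvc0n1 -s 5171 1
    # FTL bdev over both, with the L2P capped at 60 MiB of DRAM
    "$rpc" -t 240 bdev_ftl_create -b ftl0 -d 1c3c5f8f-6e1f-409b-b88d-733e70c15d8c -c nvc0n1p0 --l2p_dram_limit 60

The matching teardown appears near the end of this section as a single bdev_ftl_unload -b ftl0.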
00:18:00.648 [2024-11-29 07:49:50.349605] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:02.551 [2024-11-29 07:49:52.386285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.551 [2024-11-29 07:49:52.386340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:02.551 [2024-11-29 07:49:52.386354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2036.682 ms 00:18:02.551 [2024-11-29 07:49:52.386365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.551 [2024-11-29 07:49:52.411851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.551 [2024-11-29 07:49:52.411892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:02.551 [2024-11-29 07:49:52.411904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.282 ms 00:18:02.552 [2024-11-29 07:49:52.411914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.552 [2024-11-29 07:49:52.412043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.552 [2024-11-29 07:49:52.412055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:02.552 [2024-11-29 07:49:52.412064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:18:02.552 [2024-11-29 07:49:52.412075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.552 [2024-11-29 07:49:52.459257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.552 [2024-11-29 07:49:52.459300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:02.552 [2024-11-29 07:49:52.459312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.118 ms 00:18:02.552 [2024-11-29 07:49:52.459323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.552 [2024-11-29 07:49:52.459364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.552 [2024-11-29 07:49:52.459375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:02.552 [2024-11-29 07:49:52.459383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:02.552 [2024-11-29 07:49:52.459392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.552 [2024-11-29 07:49:52.459761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.552 [2024-11-29 07:49:52.459794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:02.552 [2024-11-29 07:49:52.459804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:18:02.552 [2024-11-29 07:49:52.459813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.552 [2024-11-29 07:49:52.459935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.552 [2024-11-29 07:49:52.459951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:02.552 [2024-11-29 07:49:52.459959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:18:02.552 [2024-11-29 07:49:52.459970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.552 [2024-11-29 07:49:52.474340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.552 [2024-11-29 07:49:52.474372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:02.552 [2024-11-29 
07:49:52.474381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.341 ms 00:18:02.552 [2024-11-29 07:49:52.474390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.552 [2024-11-29 07:49:52.485767] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:02.813 [2024-11-29 07:49:52.499609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.813 [2024-11-29 07:49:52.499643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:02.813 [2024-11-29 07:49:52.499655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.116 ms 00:18:02.813 [2024-11-29 07:49:52.499662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.813 [2024-11-29 07:49:52.545844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.813 [2024-11-29 07:49:52.545885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:02.813 [2024-11-29 07:49:52.545899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.146 ms 00:18:02.813 [2024-11-29 07:49:52.545907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.813 [2024-11-29 07:49:52.546096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.813 [2024-11-29 07:49:52.546106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:02.813 [2024-11-29 07:49:52.546118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:18:02.813 [2024-11-29 07:49:52.546125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.813 [2024-11-29 07:49:52.569213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.813 [2024-11-29 07:49:52.569242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:02.813 [2024-11-29 07:49:52.569255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.023 ms 00:18:02.814 [2024-11-29 07:49:52.569263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.814 [2024-11-29 07:49:52.592199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.814 [2024-11-29 07:49:52.592228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:02.814 [2024-11-29 07:49:52.592240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.880 ms 00:18:02.814 [2024-11-29 07:49:52.592247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.814 [2024-11-29 07:49:52.592812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.814 [2024-11-29 07:49:52.592829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:02.814 [2024-11-29 07:49:52.592839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 00:18:02.814 [2024-11-29 07:49:52.592846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.814 [2024-11-29 07:49:52.656815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.814 [2024-11-29 07:49:52.656851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:02.814 [2024-11-29 07:49:52.656865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.926 ms 00:18:02.814 [2024-11-29 07:49:52.656873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.814 [2024-11-29 
07:49:52.681273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.814 [2024-11-29 07:49:52.681304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:02.814 [2024-11-29 07:49:52.681316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.295 ms 00:18:02.814 [2024-11-29 07:49:52.681324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.814 [2024-11-29 07:49:52.704675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.814 [2024-11-29 07:49:52.704704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:02.814 [2024-11-29 07:49:52.704717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.302 ms 00:18:02.814 [2024-11-29 07:49:52.704723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.814 [2024-11-29 07:49:52.728660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.814 [2024-11-29 07:49:52.728690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:02.814 [2024-11-29 07:49:52.728702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.885 ms 00:18:02.814 [2024-11-29 07:49:52.728709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.814 [2024-11-29 07:49:52.728762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.814 [2024-11-29 07:49:52.728773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:02.814 [2024-11-29 07:49:52.728785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:02.814 [2024-11-29 07:49:52.728792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.814 [2024-11-29 07:49:52.728879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.814 [2024-11-29 07:49:52.728889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:02.814 [2024-11-29 07:49:52.728899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:18:02.814 [2024-11-29 07:49:52.728907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.814 [2024-11-29 07:49:52.729817] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2391.077 ms, result 0 00:18:02.814 { 00:18:02.814 "name": "ftl0", 00:18:02.814 "uuid": "8297c66a-1194-4613-aec7-e9348cfa1ab6" 00:18:02.814 } 00:18:02.814 07:49:52 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:18:02.814 07:49:52 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:18:02.814 07:49:52 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:02.814 07:49:52 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:18:02.814 07:49:52 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:02.814 07:49:52 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:02.814 07:49:52 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:03.073 07:49:52 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:18:03.332 [ 00:18:03.332 { 00:18:03.332 "name": "ftl0", 00:18:03.332 "aliases": [ 00:18:03.332 "8297c66a-1194-4613-aec7-e9348cfa1ab6" 00:18:03.332 ], 00:18:03.332 "product_name": "FTL 
disk", 00:18:03.332 "block_size": 4096, 00:18:03.332 "num_blocks": 20971520, 00:18:03.332 "uuid": "8297c66a-1194-4613-aec7-e9348cfa1ab6", 00:18:03.332 "assigned_rate_limits": { 00:18:03.332 "rw_ios_per_sec": 0, 00:18:03.332 "rw_mbytes_per_sec": 0, 00:18:03.332 "r_mbytes_per_sec": 0, 00:18:03.332 "w_mbytes_per_sec": 0 00:18:03.332 }, 00:18:03.332 "claimed": false, 00:18:03.332 "zoned": false, 00:18:03.332 "supported_io_types": { 00:18:03.332 "read": true, 00:18:03.332 "write": true, 00:18:03.332 "unmap": true, 00:18:03.332 "flush": true, 00:18:03.332 "reset": false, 00:18:03.332 "nvme_admin": false, 00:18:03.332 "nvme_io": false, 00:18:03.332 "nvme_io_md": false, 00:18:03.332 "write_zeroes": true, 00:18:03.332 "zcopy": false, 00:18:03.332 "get_zone_info": false, 00:18:03.332 "zone_management": false, 00:18:03.332 "zone_append": false, 00:18:03.332 "compare": false, 00:18:03.332 "compare_and_write": false, 00:18:03.332 "abort": false, 00:18:03.332 "seek_hole": false, 00:18:03.332 "seek_data": false, 00:18:03.332 "copy": false, 00:18:03.332 "nvme_iov_md": false 00:18:03.332 }, 00:18:03.332 "driver_specific": { 00:18:03.332 "ftl": { 00:18:03.332 "base_bdev": "1c3c5f8f-6e1f-409b-b88d-733e70c15d8c", 00:18:03.332 "cache": "nvc0n1p0" 00:18:03.332 } 00:18:03.332 } 00:18:03.332 } 00:18:03.332 ] 00:18:03.332 07:49:53 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:18:03.332 07:49:53 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:18:03.332 07:49:53 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:03.591 07:49:53 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:18:03.591 07:49:53 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:03.851 [2024-11-29 07:49:53.559353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.851 [2024-11-29 07:49:53.559496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:03.851 [2024-11-29 07:49:53.559515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:03.851 [2024-11-29 07:49:53.559523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.851 [2024-11-29 07:49:53.559563] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:03.851 [2024-11-29 07:49:53.561705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.851 [2024-11-29 07:49:53.561728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:03.851 [2024-11-29 07:49:53.561738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.126 ms 00:18:03.851 [2024-11-29 07:49:53.561746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.851 [2024-11-29 07:49:53.562239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.851 [2024-11-29 07:49:53.562255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:03.851 [2024-11-29 07:49:53.562264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.460 ms 00:18:03.851 [2024-11-29 07:49:53.562271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.851 [2024-11-29 07:49:53.564732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.851 [2024-11-29 07:49:53.564746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:03.851 
[2024-11-29 07:49:53.564755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.431 ms 00:18:03.851 [2024-11-29 07:49:53.564761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.851 [2024-11-29 07:49:53.569418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.851 [2024-11-29 07:49:53.569437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:03.851 [2024-11-29 07:49:53.569454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.627 ms 00:18:03.851 [2024-11-29 07:49:53.569461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.851 [2024-11-29 07:49:53.587722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.851 [2024-11-29 07:49:53.587749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:03.851 [2024-11-29 07:49:53.587770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.197 ms 00:18:03.851 [2024-11-29 07:49:53.587777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.851 [2024-11-29 07:49:53.599281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.851 [2024-11-29 07:49:53.599309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:03.851 [2024-11-29 07:49:53.599321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.456 ms 00:18:03.851 [2024-11-29 07:49:53.599328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.851 [2024-11-29 07:49:53.599523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.851 [2024-11-29 07:49:53.599532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:03.851 [2024-11-29 07:49:53.599541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:18:03.851 [2024-11-29 07:49:53.599547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.851 [2024-11-29 07:49:53.617306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.851 [2024-11-29 07:49:53.617400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:03.851 [2024-11-29 07:49:53.617415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.733 ms 00:18:03.851 [2024-11-29 07:49:53.617420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.851 [2024-11-29 07:49:53.634781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.851 [2024-11-29 07:49:53.634811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:03.851 [2024-11-29 07:49:53.634820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.307 ms 00:18:03.851 [2024-11-29 07:49:53.634825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.851 [2024-11-29 07:49:53.651813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.851 [2024-11-29 07:49:53.651901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:03.851 [2024-11-29 07:49:53.651915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.945 ms 00:18:03.851 [2024-11-29 07:49:53.651921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.851 [2024-11-29 07:49:53.669178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.851 [2024-11-29 07:49:53.669202] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:03.851 [2024-11-29 07:49:53.669211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.167 ms 00:18:03.851 [2024-11-29 07:49:53.669216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.851 [2024-11-29 07:49:53.669256] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:03.851 [2024-11-29 07:49:53.669266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free [... Bands 2-96 omitted: all identical, 0 / 261120 wr_cnt: 0 state: free ...] 00:18:03.852 [2024-11-29 07:49:53.669920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*:
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:03.852 [2024-11-29 07:49:53.669927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:03.852 [2024-11-29 07:49:53.669933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:03.852 [2024-11-29 07:49:53.669941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:03.852 [2024-11-29 07:49:53.669953] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:03.852 [2024-11-29 07:49:53.669960] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8297c66a-1194-4613-aec7-e9348cfa1ab6 00:18:03.852 [2024-11-29 07:49:53.669966] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:03.852 [2024-11-29 07:49:53.669975] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:03.852 [2024-11-29 07:49:53.669981] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:03.852 [2024-11-29 07:49:53.669988] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:03.852 [2024-11-29 07:49:53.669993] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:03.852 [2024-11-29 07:49:53.670000] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:03.852 [2024-11-29 07:49:53.670006] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:03.852 [2024-11-29 07:49:53.670011] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:03.852 [2024-11-29 07:49:53.670016] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:03.852 [2024-11-29 07:49:53.670023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.852 [2024-11-29 07:49:53.670029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:03.852 [2024-11-29 07:49:53.670036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.769 ms 00:18:03.852 [2024-11-29 07:49:53.670041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.852 [2024-11-29 07:49:53.680083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.852 [2024-11-29 07:49:53.680175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:03.852 [2024-11-29 07:49:53.680222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.008 ms 00:18:03.853 [2024-11-29 07:49:53.680241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.853 [2024-11-29 07:49:53.680550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.853 [2024-11-29 07:49:53.680575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:03.853 [2024-11-29 07:49:53.680621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:18:03.853 [2024-11-29 07:49:53.680637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.853 [2024-11-29 07:49:53.715524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:03.853 [2024-11-29 07:49:53.715609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:03.853 [2024-11-29 07:49:53.715649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:03.853 [2024-11-29 07:49:53.715667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
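The WAF value in the statistics dump above is the write amplification factor: total media writes divided by user writes. This run recorded 960 internal writes against 0 user writes, so the ratio is undefined and the FTL reports "inf". A minimal standalone sketch of that arithmetic (shell, not part of the test suite; the variable names are illustrative):

    # Values copied from the dump above.
    total_writes=960
    user_writes=0
    if [ "$user_writes" -eq 0 ]; then
        echo "WAF: inf"   # no user I/O was issued, so amplification is undefined
    else
        awk -v t="$total_writes" -v u="$user_writes" 'BEGIN { printf "WAF: %.2f\n", t / u }'
    fi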
00:18:03.853 [2024-11-29 07:49:53.715734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:03.853 [2024-11-29 07:49:53.715751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:03.853 [2024-11-29 07:49:53.715767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:03.853 [2024-11-29 07:49:53.715782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.853 [2024-11-29 07:49:53.715873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:03.853 [2024-11-29 07:49:53.715892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:03.853 [2024-11-29 07:49:53.715910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:03.853 [2024-11-29 07:49:53.715959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.853 [2024-11-29 07:49:53.716006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:03.853 [2024-11-29 07:49:53.716022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:03.853 [2024-11-29 07:49:53.716038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:03.853 [2024-11-29 07:49:53.716081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.853 [2024-11-29 07:49:53.778670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:03.853 [2024-11-29 07:49:53.778789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:03.853 [2024-11-29 07:49:53.778830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:03.853 [2024-11-29 07:49:53.778848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.111 [2024-11-29 07:49:53.827202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.111 [2024-11-29 07:49:53.827326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:04.111 [2024-11-29 07:49:53.827365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.111 [2024-11-29 07:49:53.827382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.111 [2024-11-29 07:49:53.827491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.112 [2024-11-29 07:49:53.827515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:04.112 [2024-11-29 07:49:53.827532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.112 [2024-11-29 07:49:53.827546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.112 [2024-11-29 07:49:53.827617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.112 [2024-11-29 07:49:53.827635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:04.112 [2024-11-29 07:49:53.827652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.112 [2024-11-29 07:49:53.827701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.112 [2024-11-29 07:49:53.827818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.112 [2024-11-29 07:49:53.827837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:04.112 [2024-11-29 07:49:53.827855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.112 [2024-11-29 
07:49:53.827872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.112 [2024-11-29 07:49:53.827993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.112 [2024-11-29 07:49:53.828016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:04.112 [2024-11-29 07:49:53.828033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.112 [2024-11-29 07:49:53.828047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.112 [2024-11-29 07:49:53.828100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.112 [2024-11-29 07:49:53.828178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:04.112 [2024-11-29 07:49:53.828198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.112 [2024-11-29 07:49:53.828213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.112 [2024-11-29 07:49:53.828276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.112 [2024-11-29 07:49:53.828324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:04.112 [2024-11-29 07:49:53.828344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.112 [2024-11-29 07:49:53.828358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.112 [2024-11-29 07:49:53.828529] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 269.161 ms, result 0 00:18:04.112 true 00:18:04.112 07:49:53 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 75014 00:18:04.112 07:49:53 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 75014 ']' 00:18:04.112 07:49:53 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 75014 00:18:04.112 07:49:53 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:18:04.112 07:49:53 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:04.112 07:49:53 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75014 00:18:04.112 killing process with pid 75014 00:18:04.112 07:49:53 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:04.112 07:49:53 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:04.112 07:49:53 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75014' 00:18:04.112 07:49:53 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 75014 00:18:04.112 07:49:53 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 75014 00:18:12.244 07:50:00 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:18:12.244 07:50:00 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:12.244 07:50:00 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:18:12.244 07:50:00 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:12.244 07:50:00 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:12.244 07:50:00 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:12.244 07:50:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:12.244 07:50:00 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:12.244 07:50:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:12.244 07:50:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:12.244 07:50:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:12.244 07:50:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:12.244 07:50:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:12.244 07:50:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:12.244 07:50:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:12.244 07:50:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:12.244 07:50:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:12.244 07:50:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:12.244 07:50:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:12.244 07:50:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:12.244 07:50:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:12.244 07:50:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:12.244 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:18:12.244 fio-3.35 00:18:12.244 Starting 1 thread 00:18:16.441 00:18:16.441 test: (groupid=0, jobs=1): err= 0: pid=75193: Fri Nov 29 07:50:06 2024 00:18:16.441 read: IOPS=931, BW=61.8MiB/s (64.8MB/s)(255MiB/4116msec) 00:18:16.441 slat (nsec): min=4079, max=21759, avg=5381.44, stdev=1782.26 00:18:16.441 clat (usec): min=259, max=1447, avg=485.20, stdev=139.78 00:18:16.441 lat (usec): min=264, max=1460, avg=490.58, stdev=139.96 00:18:16.441 clat percentiles (usec): 00:18:16.441 | 1.00th=[ 289], 5.00th=[ 297], 10.00th=[ 310], 20.00th=[ 388], 00:18:16.441 | 30.00th=[ 412], 40.00th=[ 469], 50.00th=[ 494], 60.00th=[ 519], 00:18:16.441 | 70.00th=[ 529], 80.00th=[ 537], 90.00th=[ 578], 95.00th=[ 791], 00:18:16.441 | 99.00th=[ 996], 99.50th=[ 1106], 99.90th=[ 1385], 99.95th=[ 1450], 00:18:16.441 | 99.99th=[ 1450] 00:18:16.441 write: IOPS=937, BW=62.3MiB/s (65.3MB/s)(256MiB/4111msec); 0 zone resets 00:18:16.441 slat (usec): min=14, max=171, avg=20.41, stdev= 4.52 00:18:16.441 clat (usec): min=267, max=1750, avg=548.20, stdev=170.96 00:18:16.441 lat (usec): min=291, max=1771, avg=568.61, stdev=170.22 00:18:16.441 clat percentiles (usec): 00:18:16.441 | 1.00th=[ 306], 5.00th=[ 314], 10.00th=[ 330], 20.00th=[ 424], 00:18:16.441 | 30.00th=[ 486], 40.00th=[ 537], 50.00th=[ 545], 60.00th=[ 562], 00:18:16.441 | 70.00th=[ 570], 80.00th=[ 611], 90.00th=[ 701], 95.00th=[ 898], 00:18:16.441 | 99.00th=[ 1172], 99.50th=[ 1303], 99.90th=[ 1713], 99.95th=[ 1729], 00:18:16.441 | 99.99th=[ 1745] 00:18:16.441 bw ( KiB/s): min=52632, max=84864, per=100.00%, avg=63784.00, stdev=9593.82, samples=8 00:18:16.441 iops : min= 774, max= 1248, avg=938.00, stdev=141.09, samples=8 00:18:16.441 lat (usec) : 500=43.05%, 750=49.42%, 1000=6.01% 
00:18:16.441 lat (msec) : 2=1.52% 00:18:16.441 cpu : usr=99.25%, sys=0.10%, ctx=7, majf=0, minf=1169 00:18:16.441 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:16.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:16.441 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:16.441 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:16.441 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:16.441 00:18:16.441 Run status group 0 (all jobs): 00:18:16.441 READ: bw=61.8MiB/s (64.8MB/s), 61.8MiB/s-61.8MiB/s (64.8MB/s-64.8MB/s), io=255MiB (267MB), run=4116-4116msec 00:18:16.441 WRITE: bw=62.3MiB/s (65.3MB/s), 62.3MiB/s-62.3MiB/s (65.3MB/s-65.3MB/s), io=256MiB (269MB), run=4111-4111msec 00:18:17.826 ----------------------------------------------------- 00:18:17.826 Suppressions used: 00:18:17.826 count bytes template 00:18:17.826 1 5 /usr/src/fio/parse.c 00:18:17.826 1 8 libtcmalloc_minimal.so 00:18:17.826 1 904 libcrypto.so 00:18:17.826 ----------------------------------------------------- 00:18:17.826 00:18:17.826 07:50:07 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:18:17.826 07:50:07 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:17.826 07:50:07 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:17.826 07:50:07 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:17.827 07:50:07 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:18:17.827 07:50:07 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:17.827 07:50:07 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:17.827 07:50:07 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:17.827 07:50:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:17.827 07:50:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:17.827 07:50:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:17.827 07:50:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:17.827 07:50:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:17.827 07:50:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:17.827 07:50:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:17.827 07:50:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:17.827 07:50:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:17.827 07:50:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:17.827 07:50:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:17.827 07:50:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:17.827 07:50:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:17.827 07:50:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:17.827 07:50:07 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:17.827 07:50:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:18.087 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:18.087 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:18.087 fio-3.35 00:18:18.087 Starting 2 threads 00:18:50.200 00:18:50.200 first_half: (groupid=0, jobs=1): err= 0: pid=75296: Fri Nov 29 07:50:39 2024 00:18:50.200 read: IOPS=2201, BW=8804KiB/s (9016kB/s)(255MiB/29671msec) 00:18:50.200 slat (nsec): min=3090, max=32212, avg=4880.56, stdev=1112.73 00:18:50.200 clat (usec): min=784, max=383488, avg=45978.16, stdev=33874.76 00:18:50.200 lat (usec): min=788, max=383493, avg=45983.04, stdev=33874.80 00:18:50.200 clat percentiles (msec): 00:18:50.200 | 1.00th=[ 21], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 34], 00:18:50.200 | 30.00th=[ 36], 40.00th=[ 37], 50.00th=[ 39], 60.00th=[ 41], 00:18:50.200 | 70.00th=[ 42], 80.00th=[ 46], 90.00th=[ 56], 95.00th=[ 80], 00:18:50.200 | 99.00th=[ 232], 99.50th=[ 275], 99.90th=[ 317], 99.95th=[ 338], 00:18:50.200 | 99.99th=[ 376] 00:18:50.200 write: IOPS=2519, BW=9.84MiB/s (10.3MB/s)(256MiB/26008msec); 0 zone resets 00:18:50.200 slat (usec): min=3, max=2546, avg= 6.48, stdev=11.18 00:18:50.200 clat (usec): min=374, max=136518, avg=12108.11, stdev=19056.59 00:18:50.200 lat (usec): min=380, max=136523, avg=12114.59, stdev=19056.58 00:18:50.200 clat percentiles (usec): 00:18:50.200 | 1.00th=[ 832], 5.00th=[ 1287], 10.00th=[ 1729], 20.00th=[ 2933], 00:18:50.200 | 30.00th=[ 4817], 40.00th=[ 6325], 50.00th=[ 7963], 60.00th=[ 9372], 00:18:50.200 | 70.00th=[ 10683], 80.00th=[ 12649], 90.00th=[ 18482], 95.00th=[ 46924], 00:18:50.200 | 99.00th=[111674], 99.50th=[116917], 99.90th=[125305], 99.95th=[130548], 00:18:50.200 | 99.99th=[133694] 00:18:50.200 bw ( KiB/s): min= 80, max=40968, per=96.87%, avg=18724.57, stdev=13012.95, samples=28 00:18:50.200 iops : min= 20, max=10242, avg=4681.14, stdev=3253.24, samples=28 00:18:50.200 lat (usec) : 500=0.01%, 750=0.20%, 1000=0.92% 00:18:50.200 lat (msec) : 2=5.53%, 4=6.73%, 10=19.99%, 20=13.00%, 50=43.42% 00:18:50.200 lat (msec) : 100=7.36%, 250=2.44%, 500=0.41% 00:18:50.200 cpu : usr=99.21%, sys=0.09%, ctx=52, majf=0, minf=5609 00:18:50.200 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:50.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:50.200 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:50.200 issued rwts: total=65308,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:50.200 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:50.200 second_half: (groupid=0, jobs=1): err= 0: pid=75297: Fri Nov 29 07:50:39 2024 00:18:50.200 read: IOPS=2183, BW=8733KiB/s (8942kB/s)(255MiB/29947msec) 00:18:50.200 slat (nsec): min=3048, max=44924, avg=4814.58, stdev=1036.05 00:18:50.200 clat (usec): min=1125, max=389321, avg=45305.88, stdev=36152.82 00:18:50.200 lat (usec): min=1131, max=389326, avg=45310.69, stdev=36152.90 00:18:50.200 clat percentiles (msec): 00:18:50.200 | 1.00th=[ 13], 5.00th=[ 30], 10.00th=[ 31], 20.00th=[ 34], 00:18:50.200 | 30.00th=[ 36], 40.00th=[ 37], 50.00th=[ 39], 60.00th=[ 40], 00:18:50.200 | 70.00th=[ 42], 80.00th=[ 
45], 90.00th=[ 55], 95.00th=[ 75], 00:18:50.200 | 99.00th=[ 253], 99.50th=[ 296], 99.90th=[ 326], 99.95th=[ 342], 00:18:50.200 | 99.99th=[ 384] 00:18:50.200 write: IOPS=2416, BW=9665KiB/s (9897kB/s)(256MiB/27124msec); 0 zone resets 00:18:50.200 slat (usec): min=3, max=523, avg= 6.44, stdev= 3.62 00:18:50.200 clat (usec): min=479, max=133801, avg=13273.18, stdev=20574.62 00:18:50.200 lat (usec): min=483, max=133808, avg=13279.62, stdev=20574.75 00:18:50.200 clat percentiles (usec): 00:18:50.200 | 1.00th=[ 816], 5.00th=[ 1303], 10.00th=[ 1696], 20.00th=[ 2573], 00:18:50.200 | 30.00th=[ 4490], 40.00th=[ 6063], 50.00th=[ 7373], 60.00th=[ 8979], 00:18:50.200 | 70.00th=[ 10421], 80.00th=[ 13304], 90.00th=[ 22676], 95.00th=[ 58983], 00:18:50.200 | 99.00th=[113771], 99.50th=[117965], 99.90th=[124257], 99.95th=[126354], 00:18:50.200 | 99.99th=[131597] 00:18:50.200 bw ( KiB/s): min= 104, max=49712, per=87.49%, avg=16912.52, stdev=13810.17, samples=31 00:18:50.200 iops : min= 26, max=12428, avg=4228.13, stdev=3452.54, samples=31 00:18:50.200 lat (usec) : 500=0.01%, 750=0.27%, 1000=0.83% 00:18:50.200 lat (msec) : 2=5.85%, 4=6.50%, 10=20.69%, 20=11.67%, 50=44.21% 00:18:50.200 lat (msec) : 100=6.98%, 250=2.48%, 500=0.52% 00:18:50.200 cpu : usr=99.41%, sys=0.07%, ctx=39, majf=0, minf=5504 00:18:50.200 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:50.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:50.200 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:50.200 issued rwts: total=65380,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:50.200 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:50.200 00:18:50.200 Run status group 0 (all jobs): 00:18:50.200 READ: bw=17.0MiB/s (17.9MB/s), 8733KiB/s-8804KiB/s (8942kB/s-9016kB/s), io=511MiB (535MB), run=29671-29947msec 00:18:50.200 WRITE: bw=18.9MiB/s (19.8MB/s), 9665KiB/s-9.84MiB/s (9897kB/s-10.3MB/s), io=512MiB (537MB), run=26008-27124msec 00:18:51.141 ----------------------------------------------------- 00:18:51.141 Suppressions used: 00:18:51.141 count bytes template 00:18:51.141 2 10 /usr/src/fio/parse.c 00:18:51.141 4 384 /usr/src/fio/iolog.c 00:18:51.141 1 8 libtcmalloc_minimal.so 00:18:51.141 1 904 libcrypto.so 00:18:51.141 ----------------------------------------------------- 00:18:51.141 00:18:51.141 07:50:40 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:18:51.141 07:50:40 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:51.141 07:50:40 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:51.141 07:50:41 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:51.141 07:50:41 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:18:51.141 07:50:41 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:51.141 07:50:41 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:51.141 07:50:41 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:51.141 07:50:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:51.141 07:50:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:51.141 07:50:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:18:51.141 07:50:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:51.141 07:50:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:51.141 07:50:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:51.141 07:50:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:51.141 07:50:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:51.141 07:50:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:51.141 07:50:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:51.141 07:50:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:51.141 07:50:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:51.141 07:50:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:51.141 07:50:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:51.141 07:50:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:51.141 07:50:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:51.401 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:51.401 fio-3.35 00:18:51.401 Starting 1 thread 00:19:09.520 00:19:09.520 test: (groupid=0, jobs=1): err= 0: pid=75670: Fri Nov 29 07:50:58 2024 00:19:09.520 read: IOPS=6831, BW=26.7MiB/s (28.0MB/s)(255MiB/9544msec) 00:19:09.520 slat (nsec): min=3020, max=21339, avg=4923.47, stdev=1142.11 00:19:09.520 clat (usec): min=892, max=38372, avg=18727.81, stdev=2889.71 00:19:09.520 lat (usec): min=901, max=38378, avg=18732.74, stdev=2889.69 00:19:09.520 clat percentiles (usec): 00:19:09.520 | 1.00th=[14877], 5.00th=[15270], 10.00th=[15401], 20.00th=[15795], 00:19:09.520 | 30.00th=[16450], 40.00th=[17695], 50.00th=[18744], 60.00th=[19530], 00:19:09.520 | 70.00th=[20055], 80.00th=[20579], 90.00th=[22152], 95.00th=[23987], 00:19:09.520 | 99.00th=[27657], 99.50th=[30016], 99.90th=[34341], 99.95th=[35390], 00:19:09.520 | 99.99th=[38011] 00:19:09.520 write: IOPS=10.7k, BW=41.9MiB/s (43.9MB/s)(256MiB/6110msec); 0 zone resets 00:19:09.520 slat (usec): min=4, max=279, avg= 6.47, stdev= 2.85 00:19:09.520 clat (usec): min=514, max=78882, avg=11872.18, stdev=14044.29 00:19:09.520 lat (usec): min=520, max=78888, avg=11878.64, stdev=14044.29 00:19:09.520 clat percentiles (usec): 00:19:09.520 | 1.00th=[ 816], 5.00th=[ 1090], 10.00th=[ 1270], 20.00th=[ 1532], 00:19:09.520 | 30.00th=[ 1811], 40.00th=[ 2638], 50.00th=[ 7635], 60.00th=[ 9372], 00:19:09.520 | 70.00th=[11994], 80.00th=[15533], 90.00th=[39584], 95.00th=[44303], 00:19:09.520 | 99.00th=[50594], 99.50th=[52691], 99.90th=[58459], 99.95th=[61604], 00:19:09.520 | 99.99th=[71828] 00:19:09.520 bw ( KiB/s): min= 6008, max=59288, per=93.98%, avg=40323.08, stdev=12553.95, samples=13 00:19:09.520 iops : min= 1502, max=14822, avg=10080.77, stdev=3138.49, samples=13 00:19:09.520 lat (usec) : 750=0.24%, 1000=1.40% 00:19:09.520 lat (msec) : 2=15.37%, 4=3.83%, 10=11.03%, 20=45.16%, 50=22.41% 00:19:09.520 lat (msec) : 100=0.55% 
00:19:09.520 cpu : usr=99.09%, sys=0.17%, ctx=21, majf=0, minf=5565 00:19:09.520 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:19:09.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.520 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:09.520 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.520 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.520 00:19:09.520 Run status group 0 (all jobs): 00:19:09.520 READ: bw=26.7MiB/s (28.0MB/s), 26.7MiB/s-26.7MiB/s (28.0MB/s-28.0MB/s), io=255MiB (267MB), run=9544-9544msec 00:19:09.520 WRITE: bw=41.9MiB/s (43.9MB/s), 41.9MiB/s-41.9MiB/s (43.9MB/s-43.9MB/s), io=256MiB (268MB), run=6110-6110msec 00:19:10.092 ----------------------------------------------------- 00:19:10.092 Suppressions used: 00:19:10.092 count bytes template 00:19:10.092 1 5 /usr/src/fio/parse.c 00:19:10.092 2 192 /usr/src/fio/iolog.c 00:19:10.092 1 8 libtcmalloc_minimal.so 00:19:10.092 1 904 libcrypto.so 00:19:10.092 ----------------------------------------------------- 00:19:10.092 00:19:10.092 07:50:59 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:19:10.092 07:50:59 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:10.092 07:50:59 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:10.092 07:50:59 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:10.092 Remove shared memory files 00:19:10.092 07:50:59 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:19:10.092 07:50:59 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:10.092 07:50:59 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:19:10.092 07:50:59 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:19:10.092 07:50:59 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57060 /dev/shm/spdk_tgt_trace.pid73934 00:19:10.092 07:50:59 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:10.092 07:50:59 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:19:10.092 ************************************ 00:19:10.092 END TEST ftl_fio_basic 00:19:10.092 ************************************ 00:19:10.092 00:19:10.092 real 1m13.191s 00:19:10.092 user 2m31.904s 00:19:10.092 sys 0m16.203s 00:19:10.092 07:50:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:10.092 07:50:59 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:10.092 07:50:59 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:10.092 07:50:59 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:10.092 07:50:59 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:10.092 07:50:59 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:10.092 ************************************ 00:19:10.092 START TEST ftl_bdevperf 00:19:10.092 ************************************ 00:19:10.092 07:50:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:10.353 * Looking for test storage... 
00:19:10.353 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:10.353 07:51:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:10.353 07:51:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:10.353 07:51:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:19:10.353 07:51:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:10.353 07:51:00 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:10.353 07:51:00 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:10.353 07:51:00 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:10.353 07:51:00 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:19:10.353 07:51:00 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:19:10.353 07:51:00 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:19:10.353 07:51:00 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:19:10.353 07:51:00 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:19:10.353 07:51:00 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:19:10.353 07:51:00 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:19:10.353 07:51:00 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:10.353 07:51:00 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:19:10.353 07:51:00 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:19:10.353 07:51:00 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:10.353 07:51:00 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:10.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.354 --rc genhtml_branch_coverage=1 00:19:10.354 --rc genhtml_function_coverage=1 00:19:10.354 --rc genhtml_legend=1 00:19:10.354 --rc geninfo_all_blocks=1 00:19:10.354 --rc geninfo_unexecuted_blocks=1 00:19:10.354 00:19:10.354 ' 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:10.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.354 --rc genhtml_branch_coverage=1 00:19:10.354 
--rc genhtml_function_coverage=1 00:19:10.354 --rc genhtml_legend=1 00:19:10.354 --rc geninfo_all_blocks=1 00:19:10.354 --rc geninfo_unexecuted_blocks=1 00:19:10.354 00:19:10.354 ' 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:10.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.354 --rc genhtml_branch_coverage=1 00:19:10.354 --rc genhtml_function_coverage=1 00:19:10.354 --rc genhtml_legend=1 00:19:10.354 --rc geninfo_all_blocks=1 00:19:10.354 --rc geninfo_unexecuted_blocks=1 00:19:10.354 00:19:10.354 ' 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:10.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.354 --rc genhtml_branch_coverage=1 00:19:10.354 --rc genhtml_function_coverage=1 00:19:10.354 --rc genhtml_legend=1 00:19:10.354 --rc geninfo_all_blocks=1 00:19:10.354 --rc geninfo_unexecuted_blocks=1 00:19:10.354 00:19:10.354 ' 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=75935 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 75935 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 75935 ']' 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:10.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:10.354 07:51:00 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:10.354 [2024-11-29 07:51:00.219629] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
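Here bdevperf is launched with -z, so it initializes, opens its RPC socket, and waits instead of running a workload immediately; the test then builds the FTL bdev stack over rpc.py before any I/O starts. A hedged sketch of that general flow (the queue depth, I/O size, workload, and runtime below are illustrative values, not taken from this run, and bdevperf.py is assumed to be the usual SPDK helper that sends the perform_tests RPC):

    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Start bdevperf idle (-z): it comes up and waits for RPCs.
    $BDEVPERF -z -q 128 -o 4096 -w randwrite -t 60 &
    perf_pid=$!
    # A real script waits for the RPC socket here, as waitforlisten does above.

    # Configure the bdev stack over RPC; the log does this next, starting with
    # the NVMe controller that backs the FTL base device.
    $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0

    # Trigger the configured workload, then shut bdevperf down when finished
    # (the suite uses killprocess for the latter).
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
    kill $perf_pid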
00:19:10.354 [2024-11-29 07:51:00.219791] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75935 ] 00:19:10.616 [2024-11-29 07:51:00.383927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.616 [2024-11-29 07:51:00.506508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:11.188 07:51:01 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:11.188 07:51:01 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:19:11.188 07:51:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:11.188 07:51:01 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:19:11.188 07:51:01 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:11.188 07:51:01 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:19:11.188 07:51:01 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:19:11.188 07:51:01 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:11.449 07:51:01 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:11.449 07:51:01 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:19:11.449 07:51:01 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:11.449 07:51:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:11.449 07:51:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:11.449 07:51:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:11.449 07:51:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:11.449 07:51:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:11.711 07:51:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:11.711 { 00:19:11.711 "name": "nvme0n1", 00:19:11.711 "aliases": [ 00:19:11.711 "8c3c92bb-08ed-4db9-b5d3-855c9b3e35e1" 00:19:11.711 ], 00:19:11.711 "product_name": "NVMe disk", 00:19:11.711 "block_size": 4096, 00:19:11.711 "num_blocks": 1310720, 00:19:11.711 "uuid": "8c3c92bb-08ed-4db9-b5d3-855c9b3e35e1", 00:19:11.711 "numa_id": -1, 00:19:11.711 "assigned_rate_limits": { 00:19:11.711 "rw_ios_per_sec": 0, 00:19:11.711 "rw_mbytes_per_sec": 0, 00:19:11.711 "r_mbytes_per_sec": 0, 00:19:11.711 "w_mbytes_per_sec": 0 00:19:11.711 }, 00:19:11.711 "claimed": true, 00:19:11.711 "claim_type": "read_many_write_one", 00:19:11.711 "zoned": false, 00:19:11.711 "supported_io_types": { 00:19:11.711 "read": true, 00:19:11.711 "write": true, 00:19:11.711 "unmap": true, 00:19:11.711 "flush": true, 00:19:11.711 "reset": true, 00:19:11.711 "nvme_admin": true, 00:19:11.711 "nvme_io": true, 00:19:11.711 "nvme_io_md": false, 00:19:11.711 "write_zeroes": true, 00:19:11.711 "zcopy": false, 00:19:11.711 "get_zone_info": false, 00:19:11.711 "zone_management": false, 00:19:11.711 "zone_append": false, 00:19:11.711 "compare": true, 00:19:11.711 "compare_and_write": false, 00:19:11.711 "abort": true, 00:19:11.711 "seek_hole": false, 00:19:11.711 "seek_data": false, 00:19:11.711 "copy": true, 00:19:11.711 "nvme_iov_md": false 00:19:11.711 }, 00:19:11.711 "driver_specific": { 00:19:11.711 
"nvme": [ 00:19:11.711 { 00:19:11.711 "pci_address": "0000:00:11.0", 00:19:11.711 "trid": { 00:19:11.711 "trtype": "PCIe", 00:19:11.711 "traddr": "0000:00:11.0" 00:19:11.711 }, 00:19:11.711 "ctrlr_data": { 00:19:11.711 "cntlid": 0, 00:19:11.711 "vendor_id": "0x1b36", 00:19:11.711 "model_number": "QEMU NVMe Ctrl", 00:19:11.711 "serial_number": "12341", 00:19:11.711 "firmware_revision": "8.0.0", 00:19:11.711 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:11.711 "oacs": { 00:19:11.711 "security": 0, 00:19:11.711 "format": 1, 00:19:11.711 "firmware": 0, 00:19:11.711 "ns_manage": 1 00:19:11.711 }, 00:19:11.711 "multi_ctrlr": false, 00:19:11.711 "ana_reporting": false 00:19:11.711 }, 00:19:11.711 "vs": { 00:19:11.711 "nvme_version": "1.4" 00:19:11.711 }, 00:19:11.711 "ns_data": { 00:19:11.711 "id": 1, 00:19:11.711 "can_share": false 00:19:11.711 } 00:19:11.711 } 00:19:11.711 ], 00:19:11.711 "mp_policy": "active_passive" 00:19:11.711 } 00:19:11.711 } 00:19:11.711 ]' 00:19:11.711 07:51:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:11.711 07:51:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:11.711 07:51:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:11.973 07:51:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:11.973 07:51:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:11.973 07:51:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:19:11.973 07:51:01 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:19:11.973 07:51:01 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:11.973 07:51:01 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:19:11.973 07:51:01 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:11.973 07:51:01 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:11.973 07:51:01 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=711b3575-e2e8-44e7-bcaa-14176a34a32c 00:19:11.973 07:51:01 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:19:11.973 07:51:01 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 711b3575-e2e8-44e7-bcaa-14176a34a32c 00:19:12.234 07:51:02 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:12.499 07:51:02 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=de42bfa3-22af-4781-8413-1519290d6e89 00:19:12.500 07:51:02 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u de42bfa3-22af-4781-8413-1519290d6e89 00:19:12.849 07:51:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=6ab9f368-3028-4223-b867-175270c90b52 00:19:12.849 07:51:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 6ab9f368-3028-4223-b867-175270c90b52 00:19:12.849 07:51:02 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:19:12.849 07:51:02 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:12.849 07:51:02 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=6ab9f368-3028-4223-b867-175270c90b52 00:19:12.849 07:51:02 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:19:12.850 07:51:02 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 6ab9f368-3028-4223-b867-175270c90b52 00:19:12.850 07:51:02 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=6ab9f368-3028-4223-b867-175270c90b52 00:19:12.850 07:51:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:12.850 07:51:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:12.850 07:51:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:12.850 07:51:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6ab9f368-3028-4223-b867-175270c90b52 00:19:12.850 07:51:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:12.850 { 00:19:12.850 "name": "6ab9f368-3028-4223-b867-175270c90b52", 00:19:12.850 "aliases": [ 00:19:12.850 "lvs/nvme0n1p0" 00:19:12.850 ], 00:19:12.850 "product_name": "Logical Volume", 00:19:12.850 "block_size": 4096, 00:19:12.850 "num_blocks": 26476544, 00:19:12.850 "uuid": "6ab9f368-3028-4223-b867-175270c90b52", 00:19:12.850 "assigned_rate_limits": { 00:19:12.850 "rw_ios_per_sec": 0, 00:19:12.850 "rw_mbytes_per_sec": 0, 00:19:12.850 "r_mbytes_per_sec": 0, 00:19:12.850 "w_mbytes_per_sec": 0 00:19:12.850 }, 00:19:12.850 "claimed": false, 00:19:12.850 "zoned": false, 00:19:12.850 "supported_io_types": { 00:19:12.850 "read": true, 00:19:12.850 "write": true, 00:19:12.850 "unmap": true, 00:19:12.850 "flush": false, 00:19:12.850 "reset": true, 00:19:12.850 "nvme_admin": false, 00:19:12.850 "nvme_io": false, 00:19:12.850 "nvme_io_md": false, 00:19:12.850 "write_zeroes": true, 00:19:12.850 "zcopy": false, 00:19:12.850 "get_zone_info": false, 00:19:12.850 "zone_management": false, 00:19:12.850 "zone_append": false, 00:19:12.850 "compare": false, 00:19:12.850 "compare_and_write": false, 00:19:12.850 "abort": false, 00:19:12.850 "seek_hole": true, 00:19:12.850 "seek_data": true, 00:19:12.850 "copy": false, 00:19:12.850 "nvme_iov_md": false 00:19:12.850 }, 00:19:12.850 "driver_specific": { 00:19:12.850 "lvol": { 00:19:12.850 "lvol_store_uuid": "de42bfa3-22af-4781-8413-1519290d6e89", 00:19:12.850 "base_bdev": "nvme0n1", 00:19:12.850 "thin_provision": true, 00:19:12.850 "num_allocated_clusters": 0, 00:19:12.850 "snapshot": false, 00:19:12.850 "clone": false, 00:19:12.850 "esnap_clone": false 00:19:12.850 } 00:19:12.850 } 00:19:12.850 } 00:19:12.850 ]' 00:19:12.850 07:51:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:13.133 07:51:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:13.133 07:51:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:13.133 07:51:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:13.133 07:51:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:13.133 07:51:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:13.133 07:51:02 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:19:13.133 07:51:02 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:19:13.133 07:51:02 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:13.392 07:51:03 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:13.392 07:51:03 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:13.392 07:51:03 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 6ab9f368-3028-4223-b867-175270c90b52 00:19:13.392 07:51:03 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=6ab9f368-3028-4223-b867-175270c90b52 00:19:13.392 07:51:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:13.392 07:51:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:13.392 07:51:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:13.392 07:51:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6ab9f368-3028-4223-b867-175270c90b52 00:19:13.392 07:51:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:13.392 { 00:19:13.392 "name": "6ab9f368-3028-4223-b867-175270c90b52", 00:19:13.392 "aliases": [ 00:19:13.392 "lvs/nvme0n1p0" 00:19:13.392 ], 00:19:13.392 "product_name": "Logical Volume", 00:19:13.392 "block_size": 4096, 00:19:13.392 "num_blocks": 26476544, 00:19:13.392 "uuid": "6ab9f368-3028-4223-b867-175270c90b52", 00:19:13.392 "assigned_rate_limits": { 00:19:13.392 "rw_ios_per_sec": 0, 00:19:13.392 "rw_mbytes_per_sec": 0, 00:19:13.392 "r_mbytes_per_sec": 0, 00:19:13.392 "w_mbytes_per_sec": 0 00:19:13.392 }, 00:19:13.392 "claimed": false, 00:19:13.392 "zoned": false, 00:19:13.392 "supported_io_types": { 00:19:13.392 "read": true, 00:19:13.392 "write": true, 00:19:13.392 "unmap": true, 00:19:13.392 "flush": false, 00:19:13.392 "reset": true, 00:19:13.392 "nvme_admin": false, 00:19:13.392 "nvme_io": false, 00:19:13.392 "nvme_io_md": false, 00:19:13.392 "write_zeroes": true, 00:19:13.392 "zcopy": false, 00:19:13.392 "get_zone_info": false, 00:19:13.392 "zone_management": false, 00:19:13.392 "zone_append": false, 00:19:13.392 "compare": false, 00:19:13.392 "compare_and_write": false, 00:19:13.392 "abort": false, 00:19:13.392 "seek_hole": true, 00:19:13.392 "seek_data": true, 00:19:13.392 "copy": false, 00:19:13.392 "nvme_iov_md": false 00:19:13.392 }, 00:19:13.392 "driver_specific": { 00:19:13.392 "lvol": { 00:19:13.392 "lvol_store_uuid": "de42bfa3-22af-4781-8413-1519290d6e89", 00:19:13.392 "base_bdev": "nvme0n1", 00:19:13.392 "thin_provision": true, 00:19:13.392 "num_allocated_clusters": 0, 00:19:13.392 "snapshot": false, 00:19:13.392 "clone": false, 00:19:13.392 "esnap_clone": false 00:19:13.392 } 00:19:13.392 } 00:19:13.392 } 00:19:13.392 ]' 00:19:13.392 07:51:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:13.392 07:51:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:13.392 07:51:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:13.650 07:51:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:13.650 07:51:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:13.650 07:51:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:13.650 07:51:03 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:19:13.650 07:51:03 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:13.650 07:51:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:19:13.650 07:51:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 6ab9f368-3028-4223-b867-175270c90b52 00:19:13.650 07:51:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=6ab9f368-3028-4223-b867-175270c90b52 00:19:13.650 07:51:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:13.650 07:51:03 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:19:13.650 07:51:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:13.650 07:51:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6ab9f368-3028-4223-b867-175270c90b52 00:19:13.909 07:51:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:13.909 { 00:19:13.909 "name": "6ab9f368-3028-4223-b867-175270c90b52", 00:19:13.909 "aliases": [ 00:19:13.909 "lvs/nvme0n1p0" 00:19:13.909 ], 00:19:13.909 "product_name": "Logical Volume", 00:19:13.909 "block_size": 4096, 00:19:13.909 "num_blocks": 26476544, 00:19:13.909 "uuid": "6ab9f368-3028-4223-b867-175270c90b52", 00:19:13.909 "assigned_rate_limits": { 00:19:13.909 "rw_ios_per_sec": 0, 00:19:13.909 "rw_mbytes_per_sec": 0, 00:19:13.909 "r_mbytes_per_sec": 0, 00:19:13.909 "w_mbytes_per_sec": 0 00:19:13.909 }, 00:19:13.909 "claimed": false, 00:19:13.909 "zoned": false, 00:19:13.909 "supported_io_types": { 00:19:13.909 "read": true, 00:19:13.909 "write": true, 00:19:13.909 "unmap": true, 00:19:13.909 "flush": false, 00:19:13.909 "reset": true, 00:19:13.909 "nvme_admin": false, 00:19:13.909 "nvme_io": false, 00:19:13.909 "nvme_io_md": false, 00:19:13.909 "write_zeroes": true, 00:19:13.909 "zcopy": false, 00:19:13.909 "get_zone_info": false, 00:19:13.909 "zone_management": false, 00:19:13.909 "zone_append": false, 00:19:13.909 "compare": false, 00:19:13.909 "compare_and_write": false, 00:19:13.909 "abort": false, 00:19:13.909 "seek_hole": true, 00:19:13.909 "seek_data": true, 00:19:13.909 "copy": false, 00:19:13.909 "nvme_iov_md": false 00:19:13.909 }, 00:19:13.909 "driver_specific": { 00:19:13.909 "lvol": { 00:19:13.909 "lvol_store_uuid": "de42bfa3-22af-4781-8413-1519290d6e89", 00:19:13.909 "base_bdev": "nvme0n1", 00:19:13.909 "thin_provision": true, 00:19:13.909 "num_allocated_clusters": 0, 00:19:13.909 "snapshot": false, 00:19:13.909 "clone": false, 00:19:13.909 "esnap_clone": false 00:19:13.909 } 00:19:13.909 } 00:19:13.909 } 00:19:13.909 ]' 00:19:13.909 07:51:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:13.909 07:51:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:13.909 07:51:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:13.909 07:51:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:13.909 07:51:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:13.909 07:51:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:13.909 07:51:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:19:13.909 07:51:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 6ab9f368-3028-4223-b867-175270c90b52 -c nvc0n1p0 --l2p_dram_limit 20 00:19:14.169 [2024-11-29 07:51:04.007941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.169 [2024-11-29 07:51:04.007986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:14.169 [2024-11-29 07:51:04.007997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:14.169 [2024-11-29 07:51:04.008007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.169 [2024-11-29 07:51:04.008051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.169 [2024-11-29 07:51:04.008060] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:14.169 [2024-11-29 07:51:04.008066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:19:14.169 [2024-11-29 07:51:04.008073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.169 [2024-11-29 07:51:04.008087] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:14.169 [2024-11-29 07:51:04.008681] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:14.169 [2024-11-29 07:51:04.008700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.169 [2024-11-29 07:51:04.008707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:14.169 [2024-11-29 07:51:04.008714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.617 ms 00:19:14.169 [2024-11-29 07:51:04.008721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.169 [2024-11-29 07:51:04.008794] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 74f7ca33-fb85-4891-bf87-a79761decf4b 00:19:14.169 [2024-11-29 07:51:04.009758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.169 [2024-11-29 07:51:04.009783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:14.169 [2024-11-29 07:51:04.009795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:19:14.169 [2024-11-29 07:51:04.009802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.169 [2024-11-29 07:51:04.014549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.169 [2024-11-29 07:51:04.014576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:14.169 [2024-11-29 07:51:04.014585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.717 ms 00:19:14.169 [2024-11-29 07:51:04.014593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.169 [2024-11-29 07:51:04.014658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.169 [2024-11-29 07:51:04.014666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:14.169 [2024-11-29 07:51:04.014675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:19:14.169 [2024-11-29 07:51:04.014681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.169 [2024-11-29 07:51:04.014718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.169 [2024-11-29 07:51:04.014725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:14.169 [2024-11-29 07:51:04.014733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:14.169 [2024-11-29 07:51:04.014738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.169 [2024-11-29 07:51:04.014756] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:14.169 [2024-11-29 07:51:04.017729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.169 [2024-11-29 07:51:04.017758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:14.169 [2024-11-29 07:51:04.017765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.980 ms 00:19:14.169 [2024-11-29 07:51:04.017775] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.169 [2024-11-29 07:51:04.017798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.169 [2024-11-29 07:51:04.017806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:14.169 [2024-11-29 07:51:04.017812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:14.169 [2024-11-29 07:51:04.017819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.169 [2024-11-29 07:51:04.017836] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:14.169 [2024-11-29 07:51:04.017942] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:14.169 [2024-11-29 07:51:04.017952] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:14.169 [2024-11-29 07:51:04.017961] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:14.169 [2024-11-29 07:51:04.017969] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:14.169 [2024-11-29 07:51:04.017977] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:14.169 [2024-11-29 07:51:04.017983] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:14.169 [2024-11-29 07:51:04.017990] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:14.169 [2024-11-29 07:51:04.017995] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:14.169 [2024-11-29 07:51:04.018002] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:14.169 [2024-11-29 07:51:04.018009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.169 [2024-11-29 07:51:04.018018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:14.169 [2024-11-29 07:51:04.018024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.174 ms 00:19:14.169 [2024-11-29 07:51:04.018030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.169 [2024-11-29 07:51:04.018092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.169 [2024-11-29 07:51:04.018100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:14.169 [2024-11-29 07:51:04.018105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:19:14.169 [2024-11-29 07:51:04.018113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.169 [2024-11-29 07:51:04.018181] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:14.170 [2024-11-29 07:51:04.018190] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:14.170 [2024-11-29 07:51:04.018196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:14.170 [2024-11-29 07:51:04.018203] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:14.170 [2024-11-29 07:51:04.018209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:14.170 [2024-11-29 07:51:04.018215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:14.170 [2024-11-29 07:51:04.018220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:14.170 
[2024-11-29 07:51:04.018226] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:14.170 [2024-11-29 07:51:04.018231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:14.170 [2024-11-29 07:51:04.018238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:14.170 [2024-11-29 07:51:04.018243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:14.170 [2024-11-29 07:51:04.018254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:14.170 [2024-11-29 07:51:04.018259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:14.170 [2024-11-29 07:51:04.018265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:14.170 [2024-11-29 07:51:04.018270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:14.170 [2024-11-29 07:51:04.018280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:14.170 [2024-11-29 07:51:04.018286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:14.170 [2024-11-29 07:51:04.018292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:14.170 [2024-11-29 07:51:04.018297] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:14.170 [2024-11-29 07:51:04.018303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:14.170 [2024-11-29 07:51:04.018308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:14.170 [2024-11-29 07:51:04.018315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:14.170 [2024-11-29 07:51:04.018320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:14.170 [2024-11-29 07:51:04.018326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:14.170 [2024-11-29 07:51:04.018331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:14.170 [2024-11-29 07:51:04.018337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:14.170 [2024-11-29 07:51:04.018342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:14.170 [2024-11-29 07:51:04.018347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:14.170 [2024-11-29 07:51:04.018353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:14.170 [2024-11-29 07:51:04.018359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:14.170 [2024-11-29 07:51:04.018364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:14.170 [2024-11-29 07:51:04.018371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:14.170 [2024-11-29 07:51:04.018376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:14.170 [2024-11-29 07:51:04.018381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:14.170 [2024-11-29 07:51:04.018386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:14.170 [2024-11-29 07:51:04.018392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:14.170 [2024-11-29 07:51:04.018397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:14.170 [2024-11-29 07:51:04.018403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:14.170 [2024-11-29 07:51:04.018408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:19:14.170 [2024-11-29 07:51:04.018414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:14.170 [2024-11-29 07:51:04.018419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:14.170 [2024-11-29 07:51:04.018424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:14.170 [2024-11-29 07:51:04.018429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:14.170 [2024-11-29 07:51:04.018436] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:14.170 [2024-11-29 07:51:04.018451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:14.170 [2024-11-29 07:51:04.018458] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:14.170 [2024-11-29 07:51:04.018463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:14.170 [2024-11-29 07:51:04.018472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:14.170 [2024-11-29 07:51:04.018478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:14.170 [2024-11-29 07:51:04.018484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:14.170 [2024-11-29 07:51:04.018489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:14.170 [2024-11-29 07:51:04.018495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:14.170 [2024-11-29 07:51:04.018500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:14.170 [2024-11-29 07:51:04.018509] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:14.170 [2024-11-29 07:51:04.018516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:14.170 [2024-11-29 07:51:04.018524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:14.170 [2024-11-29 07:51:04.018530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:14.170 [2024-11-29 07:51:04.018537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:14.170 [2024-11-29 07:51:04.018542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:14.170 [2024-11-29 07:51:04.018549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:14.170 [2024-11-29 07:51:04.018554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:14.170 [2024-11-29 07:51:04.018561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:14.170 [2024-11-29 07:51:04.018567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:14.170 [2024-11-29 07:51:04.018574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:14.170 [2024-11-29 07:51:04.018579] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:14.170 [2024-11-29 07:51:04.018585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:14.170 [2024-11-29 07:51:04.018591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:14.170 [2024-11-29 07:51:04.018598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:14.170 [2024-11-29 07:51:04.018603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:14.170 [2024-11-29 07:51:04.018610] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:14.170 [2024-11-29 07:51:04.018616] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:14.170 [2024-11-29 07:51:04.018630] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:14.170 [2024-11-29 07:51:04.018636] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:14.170 [2024-11-29 07:51:04.018642] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:14.170 [2024-11-29 07:51:04.018648] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:14.170 [2024-11-29 07:51:04.018655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.170 [2024-11-29 07:51:04.018661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:14.170 [2024-11-29 07:51:04.018668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.523 ms 00:19:14.170 [2024-11-29 07:51:04.018673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.170 [2024-11-29 07:51:04.018710] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
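At this point the FTL layout is fixed and the NV cache scrub starts. For orientation, here is the bring-up that led to this state, condensed from the shell trace above into a minimal sketch (the $RPC shorthand and the two <...-uuid> placeholders are editorial; addresses, sizes, and names are the ones this run actually used, and get_bdev_size above simply evaluates block_size * num_blocks / 1048576, e.g. 4096 * 26476544 / 1048576 = 103424 MiB for the lvol):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Base device (QEMU NVMe, 1310720 x 4096 B = 5120 MiB) hosting the lvstore:
  $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
  $RPC bdev_lvol_create_lvstore nvme0n1 lvs                  # stale lvstores were deleted first
  $RPC bdev_lvol_create nvme0n1p0 103424 -t -u <lvs-uuid>    # thin-provisioned 103424 MiB volume
  # Second controller provides the 5171 MiB write-buffer / NV cache slice:
  $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
  $RPC bdev_split_create nvc0n1 -s 5171 1                    # -> nvc0n1p0
  $RPC -t 240 bdev_ftl_create -b ftl0 -d <lvol-uuid> -c nvc0n1p0 --l2p_dram_limit 20

--l2p_dram_limit 20 caps the resident L2P at 20 MiB, while the full table (20971520 entries x 4 bytes) fills the 80.00 MiB l2p region in the layout dump above; the 'l2p maximum resident size is: 19 (of 20) MiB' notice further down reflects the same cap.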
00:19:14.170 [2024-11-29 07:51:04.018718] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:18.377 [2024-11-29 07:51:07.758896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.377 [2024-11-29 07:51:07.758986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:18.377 [2024-11-29 07:51:07.759007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3740.156 ms 00:19:18.377 [2024-11-29 07:51:07.759017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.377 [2024-11-29 07:51:07.791156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.377 [2024-11-29 07:51:07.791215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:18.377 [2024-11-29 07:51:07.791231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.888 ms 00:19:18.377 [2024-11-29 07:51:07.791241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.377 [2024-11-29 07:51:07.791387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.377 [2024-11-29 07:51:07.791399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:18.377 [2024-11-29 07:51:07.791414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:19:18.377 [2024-11-29 07:51:07.791422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.377 [2024-11-29 07:51:07.838648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.377 [2024-11-29 07:51:07.838708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:18.377 [2024-11-29 07:51:07.838726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.170 ms 00:19:18.377 [2024-11-29 07:51:07.838735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.377 [2024-11-29 07:51:07.838782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.377 [2024-11-29 07:51:07.838792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:18.377 [2024-11-29 07:51:07.838803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:18.377 [2024-11-29 07:51:07.838814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.377 [2024-11-29 07:51:07.839427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.377 [2024-11-29 07:51:07.839486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:18.377 [2024-11-29 07:51:07.839501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.544 ms 00:19:18.377 [2024-11-29 07:51:07.839509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.377 [2024-11-29 07:51:07.839635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.377 [2024-11-29 07:51:07.839646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:18.377 [2024-11-29 07:51:07.839660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:19:18.377 [2024-11-29 07:51:07.839668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.377 [2024-11-29 07:51:07.855693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.377 [2024-11-29 07:51:07.855743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:18.377 [2024-11-29 
07:51:07.855756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.003 ms 00:19:18.377 [2024-11-29 07:51:07.855773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.377 [2024-11-29 07:51:07.868977] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:19:18.377 [2024-11-29 07:51:07.876209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.377 [2024-11-29 07:51:07.876259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:18.377 [2024-11-29 07:51:07.876270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.335 ms 00:19:18.377 [2024-11-29 07:51:07.876281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.377 [2024-11-29 07:51:07.970934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.377 [2024-11-29 07:51:07.971000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:18.377 [2024-11-29 07:51:07.971017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 94.622 ms 00:19:18.377 [2024-11-29 07:51:07.971029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.377 [2024-11-29 07:51:07.971215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.377 [2024-11-29 07:51:07.971232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:18.377 [2024-11-29 07:51:07.971241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:19:18.377 [2024-11-29 07:51:07.971254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.377 [2024-11-29 07:51:07.997010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.377 [2024-11-29 07:51:07.997066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:18.377 [2024-11-29 07:51:07.997080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.702 ms 00:19:18.377 [2024-11-29 07:51:07.997091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.377 [2024-11-29 07:51:08.022312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.377 [2024-11-29 07:51:08.022358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:18.377 [2024-11-29 07:51:08.022371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.193 ms 00:19:18.377 [2024-11-29 07:51:08.022381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.377 [2024-11-29 07:51:08.022982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.377 [2024-11-29 07:51:08.023006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:18.377 [2024-11-29 07:51:08.023017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.579 ms 00:19:18.377 [2024-11-29 07:51:08.023027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.377 [2024-11-29 07:51:08.105705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.377 [2024-11-29 07:51:08.105769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:18.377 [2024-11-29 07:51:08.105783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.637 ms 00:19:18.377 [2024-11-29 07:51:08.105794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.377 [2024-11-29 
07:51:08.133504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.378 [2024-11-29 07:51:08.133564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:18.378 [2024-11-29 07:51:08.133581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.617 ms 00:19:18.378 [2024-11-29 07:51:08.133592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.378 [2024-11-29 07:51:08.159607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.378 [2024-11-29 07:51:08.159664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:18.378 [2024-11-29 07:51:08.159676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.965 ms 00:19:18.378 [2024-11-29 07:51:08.159686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.378 [2024-11-29 07:51:08.185742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.378 [2024-11-29 07:51:08.185797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:18.378 [2024-11-29 07:51:08.185810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.009 ms 00:19:18.378 [2024-11-29 07:51:08.185820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.378 [2024-11-29 07:51:08.185872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.378 [2024-11-29 07:51:08.185888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:18.378 [2024-11-29 07:51:08.185897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:18.378 [2024-11-29 07:51:08.185908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.378 [2024-11-29 07:51:08.185999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.378 [2024-11-29 07:51:08.186012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:18.378 [2024-11-29 07:51:08.186021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:18.378 [2024-11-29 07:51:08.186031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.378 [2024-11-29 07:51:08.187162] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4178.714 ms, result 0 00:19:18.378 { 00:19:18.378 "name": "ftl0", 00:19:18.378 "uuid": "74f7ca33-fb85-4891-bf87-a79761decf4b" 00:19:18.378 } 00:19:18.378 07:51:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:19:18.378 07:51:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:19:18.378 07:51:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:19:18.640 07:51:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:19:18.640 [2024-11-29 07:51:08.527348] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:18.640 I/O size of 69632 is greater than zero copy threshold (65536). 00:19:18.640 Zero copy mechanism will not be used. 00:19:18.640 Running I/O for 4 seconds... 
00:19:20.603 722.00 IOPS, 47.95 MiB/s [2024-11-29T07:51:11.934Z] 735.50 IOPS, 48.84 MiB/s [2024-11-29T07:51:12.878Z] 820.00 IOPS, 54.45 MiB/s [2024-11-29T07:51:12.878Z] 787.50 IOPS, 52.29 MiB/s 00:19:22.934 Latency(us) 00:19:22.934 [2024-11-29T07:51:12.878Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:22.934 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:19:22.934 ftl0 : 4.00 787.40 52.29 0.00 0.00 1344.33 305.62 3251.59 00:19:22.934 [2024-11-29T07:51:12.878Z] =================================================================================================================== 00:19:22.934 [2024-11-29T07:51:12.878Z] Total : 787.40 52.29 0.00 0.00 1344.33 305.62 3251.59 00:19:22.934 [2024-11-29 07:51:12.537917] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:22.934 { 00:19:22.934 "results": [ 00:19:22.934 { 00:19:22.934 "job": "ftl0", 00:19:22.934 "core_mask": "0x1", 00:19:22.934 "workload": "randwrite", 00:19:22.934 "status": "finished", 00:19:22.934 "queue_depth": 1, 00:19:22.934 "io_size": 69632, 00:19:22.934 "runtime": 4.001792, 00:19:22.934 "iops": 787.3972460337769, 00:19:22.934 "mibps": 52.2880983694305, 00:19:22.934 "io_failed": 0, 00:19:22.934 "io_timeout": 0, 00:19:22.934 "avg_latency_us": 1344.3307882723432, 00:19:22.934 "min_latency_us": 305.62461538461537, 00:19:22.934 "max_latency_us": 3251.5938461538462 00:19:22.934 } 00:19:22.934 ], 00:19:22.934 "core_count": 1 00:19:22.934 } 00:19:22.934 07:51:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:19:22.934 [2024-11-29 07:51:12.650665] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:22.934 Running I/O for 4 seconds... 
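Two quick checks on the just-completed first run (randwrite, queue depth 1): the unusual 69632-byte I/O size is 17 blocks of 4096 bytes (68 KiB), which is why bdevperf warned that it exceeds the 65536-byte zero-copy threshold; and the MiB/s column is just IOPS x io_size, so the table can be reproduced from the results JSON (an illustrative one-liner, not part of the test):

  awk 'BEGIN { printf "%.2f MiB/s\n", 787.3972460337769 * 69632 / 1048576 }'   # prints 52.29, matching the table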
00:19:24.820 5772.00 IOPS, 22.55 MiB/s [2024-11-29T07:51:15.709Z] 5211.00 IOPS, 20.36 MiB/s [2024-11-29T07:51:17.096Z] 5029.67 IOPS, 19.65 MiB/s [2024-11-29T07:51:17.096Z] 4933.50 IOPS, 19.27 MiB/s 00:19:27.152 Latency(us) 00:19:27.152 [2024-11-29T07:51:17.096Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:27.152 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:19:27.152 ftl0 : 4.04 4916.51 19.21 0.00 0.00 25909.70 582.89 45371.08 00:19:27.152 [2024-11-29T07:51:17.096Z] =================================================================================================================== 00:19:27.152 [2024-11-29T07:51:17.096Z] Total : 4916.51 19.21 0.00 0.00 25909.70 582.89 45371.08 00:19:27.152 [2024-11-29 07:51:16.699588] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 { 00:19:27.152 "results": [ 00:19:27.152 { 00:19:27.152 "job": "ftl0", 00:19:27.152 "core_mask": "0x1", 00:19:27.152 "workload": "randwrite", 00:19:27.152 "status": "finished", 00:19:27.152 "queue_depth": 128, 00:19:27.152 "io_size": 4096, 00:19:27.152 "runtime": 4.038634, 00:19:27.152 "iops": 4916.513850970402, 00:19:27.152 "mibps": 19.205132230353133, 00:19:27.152 "io_failed": 0, 00:19:27.152 "io_timeout": 0, 00:19:27.152 "avg_latency_us": 25909.70128804314, 00:19:27.152 "min_latency_us": 582.8923076923077, 00:19:27.152 "max_latency_us": 45371.07692307692 00:19:27.152 } 00:19:27.152 ], 00:19:27.152 "core_count": 1 00:19:27.152 } 00:19:27.152 07:51:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 [2024-11-29 07:51:16.816097] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:27.152 Running I/O for 4 seconds... 
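The queue-depth-128 random-write result above is also self-consistent under Little's law: mean in-flight I/O = IOPS x average latency, which should approach the configured depth when the device rather than the generator is the bottleneck (again an illustrative check, not part of the test):

  awk 'BEGIN { printf "in flight ~ %.1f\n", 4916.513850970402 * 25909.70128804314 / 1e6 }'   # ~127.4 of -q 128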
00:19:29.042 4232.00 IOPS, 16.53 MiB/s [2024-11-29T07:51:19.930Z] 4400.50 IOPS, 17.19 MiB/s [2024-11-29T07:51:20.874Z] 4411.33 IOPS, 17.23 MiB/s [2024-11-29T07:51:20.874Z] 4436.50 IOPS, 17.33 MiB/s 00:19:30.930 Latency(us) 00:19:30.930 [2024-11-29T07:51:20.874Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:30.930 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:30.930 Verification LBA range: start 0x0 length 0x1400000 00:19:30.930 ftl0 : 4.02 4451.41 17.39 0.00 0.00 28673.81 389.12 41539.74 00:19:30.930 [2024-11-29T07:51:20.874Z] =================================================================================================================== 00:19:30.930 [2024-11-29T07:51:20.874Z] Total : 4451.41 17.39 0.00 0.00 28673.81 389.12 41539.74 00:19:30.930 { 00:19:30.930 "results": [ 00:19:30.930 { 00:19:30.930 "job": "ftl0", 00:19:30.930 "core_mask": "0x1", 00:19:30.930 "workload": "verify", 00:19:30.930 "status": "finished", 00:19:30.930 "verify_range": { 00:19:30.930 "start": 0, 00:19:30.930 "length": 20971520 00:19:30.930 }, 00:19:30.930 "queue_depth": 128, 00:19:30.930 "io_size": 4096, 00:19:30.930 "runtime": 4.015358, 00:19:30.930 "iops": 4451.408815851538, 00:19:30.930 "mibps": 17.38831568692007, 00:19:30.930 "io_failed": 0, 00:19:30.930 "io_timeout": 0, 00:19:30.930 "avg_latency_us": 28673.808422375434, 00:19:30.930 "min_latency_us": 389.12, 00:19:30.930 "max_latency_us": 41539.74153846154 00:19:30.930 } 00:19:30.930 ], 00:19:30.930 "core_count": 1 00:19:30.930 } 00:19:30.930 [2024-11-29 07:51:20.848643] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:30.930 07:51:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:19:31.191 [2024-11-29 07:51:21.067939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.191 [2024-11-29 07:51:21.068177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:31.191 [2024-11-29 07:51:21.068201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:31.191 [2024-11-29 07:51:21.068214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.191 [2024-11-29 07:51:21.068245] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:31.192 [2024-11-29 07:51:21.071259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.192 [2024-11-29 07:51:21.071438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:31.192 [2024-11-29 07:51:21.071483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.991 ms 00:19:31.192 [2024-11-29 07:51:21.071493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.192 [2024-11-29 07:51:21.074236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.192 [2024-11-29 07:51:21.074290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:31.192 [2024-11-29 07:51:21.074307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.703 ms 00:19:31.192 [2024-11-29 07:51:21.074316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.454 [2024-11-29 07:51:21.297395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.454 [2024-11-29 07:51:21.297654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 
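The bdev_ftl_delete above is a clean shutdown rather than a bare teardown: the trace around this point persists the L2P, the NV cache and valid-map metadata, band and trim state, and finally the superblock before setting the FTL clean state, so a subsequent bdev_ftl_create on the same volumes should start up without dirty-shutdown recovery. The fixture is torn down through the same RPC interface it was built with:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0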
00:19:31.454 [2024-11-29 07:51:21.297691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 223.051 ms 00:19:31.454 [2024-11-29 07:51:21.297700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.454 [2024-11-29 07:51:21.303961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.454 [2024-11-29 07:51:21.304013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:31.454 [2024-11-29 07:51:21.304029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.186 ms 00:19:31.454 [2024-11-29 07:51:21.304041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.454 [2024-11-29 07:51:21.331436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.454 [2024-11-29 07:51:21.331499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:31.454 [2024-11-29 07:51:21.331516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.311 ms 00:19:31.454 [2024-11-29 07:51:21.331524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.454 [2024-11-29 07:51:21.349056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.454 [2024-11-29 07:51:21.349114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:31.454 [2024-11-29 07:51:21.349130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.474 ms 00:19:31.454 [2024-11-29 07:51:21.349139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.454 [2024-11-29 07:51:21.349305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.454 [2024-11-29 07:51:21.349318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:31.454 [2024-11-29 07:51:21.349334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:19:31.454 [2024-11-29 07:51:21.349342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.454 [2024-11-29 07:51:21.376103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.454 [2024-11-29 07:51:21.376158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:31.454 [2024-11-29 07:51:21.376174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.742 ms 00:19:31.454 [2024-11-29 07:51:21.376181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.717 [2024-11-29 07:51:21.402496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.717 [2024-11-29 07:51:21.402696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:31.717 [2024-11-29 07:51:21.402724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.257 ms 00:19:31.717 [2024-11-29 07:51:21.402732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.717 [2024-11-29 07:51:21.428431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.717 [2024-11-29 07:51:21.428503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:31.717 [2024-11-29 07:51:21.428519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.623 ms 00:19:31.717 [2024-11-29 07:51:21.428526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.717 [2024-11-29 07:51:21.454363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.717 [2024-11-29 07:51:21.454585] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:31.717 [2024-11-29 07:51:21.454617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.721 ms 00:19:31.717 [2024-11-29 07:51:21.454624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.717 [2024-11-29 07:51:21.454668] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:31.717 [2024-11-29 07:51:21.454685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:31.717 [2024-11-29 07:51:21.454699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:31.717 [2024-11-29 07:51:21.454707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:31.717 [2024-11-29 07:51:21.454718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:31.717 [2024-11-29 07:51:21.454725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:31.717 [2024-11-29 07:51:21.454737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:31.717 [2024-11-29 07:51:21.454745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:31.717 [2024-11-29 07:51:21.454757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:31.717 [2024-11-29 07:51:21.454766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:31.717 [2024-11-29 07:51:21.454776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:31.717 [2024-11-29 07:51:21.454784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:31.717 [2024-11-29 07:51:21.454794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:31.717 [2024-11-29 07:51:21.454801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:31.717 [2024-11-29 07:51:21.454813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:31.717 [2024-11-29 07:51:21.454821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:31.717 [2024-11-29 07:51:21.454830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:31.717 [2024-11-29 07:51:21.454838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:31.717 [2024-11-29 07:51:21.454847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:31.717 [2024-11-29 07:51:21.454854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:31.717 [2024-11-29 07:51:21.454866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:31.717 [2024-11-29 07:51:21.454874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:31.717 [2024-11-29 07:51:21.454883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free [Bands 23 through 96 report identically: 0 / 261120 wr_cnt: 0 state: free; 74 repeated ftl_dev_dump_bands records collapsed] 00:19:31.718 [2024-11-29 07:51:21.455578] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:31.718 [2024-11-29 07:51:21.455589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:31.718 [2024-11-29 07:51:21.455597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:31.719 [2024-11-29 07:51:21.455609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:31.719 [2024-11-29 07:51:21.455625] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:31.719 [2024-11-29 07:51:21.455636] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 74f7ca33-fb85-4891-bf87-a79761decf4b 00:19:31.719 [2024-11-29 07:51:21.455655] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:31.719 [2024-11-29 07:51:21.455665] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:31.719 [2024-11-29 07:51:21.455672] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:31.719 [2024-11-29 07:51:21.455682] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:31.719 [2024-11-29 07:51:21.455689] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:31.719 [2024-11-29 07:51:21.455699] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:31.719 [2024-11-29 07:51:21.455706] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:31.719 [2024-11-29 07:51:21.455717] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:31.719 [2024-11-29 07:51:21.455724] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:31.719 [2024-11-29 07:51:21.455734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.719 [2024-11-29 07:51:21.455741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:31.719 [2024-11-29 07:51:21.455752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.067 ms 00:19:31.719 [2024-11-29 07:51:21.455764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.719 [2024-11-29 07:51:21.469369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.719 [2024-11-29 07:51:21.469418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:31.719 [2024-11-29 07:51:21.469433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.538 ms 00:19:31.719 [2024-11-29 07:51:21.469441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.719 [2024-11-29 07:51:21.469885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.719 [2024-11-29 07:51:21.469904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:31.719 [2024-11-29 07:51:21.469916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.393 ms 00:19:31.719 [2024-11-29 07:51:21.469924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.719 [2024-11-29 07:51:21.509135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.719 [2024-11-29 07:51:21.509352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:31.719 [2024-11-29 07:51:21.509383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.719 [2024-11-29 07:51:21.509392] mngt/ftl_mngt.c: 431:trace_step: 
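The WAF line in the statistics dump above is the write amplification factor the FTL core reports, i.e. total media writes divided by user writes. This teardown happened before bdevperf completed any user I/O on ftl0 (user writes: 0 against total writes: 960), so the ratio degenerates and prints as inf. A minimal sketch for recomputing it from a saved console log, assuming one record per line as in the raw output; build.log is a hypothetical file name:

  awk '/ftl_dev_dump_stats/ && /total writes:/ {t=$NF}
       /ftl_dev_dump_stats/ && /user writes:/  {u=$NF}
       END { if (u > 0) printf "WAF: %.2f\n", t / u; else print "WAF: inf" }' build.log

With the counters above this prints WAF: inf, matching the ftl_debug.c output.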
00:19:31.719 [2024-11-29 07:51:21.509135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.719 [2024-11-29 07:51:21.509352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:31.719 [2024-11-29 07:51:21.509383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.719 [2024-11-29 07:51:21.509392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.719 [2024-11-29 07:51:21.509488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.719 [2024-11-29 07:51:21.509499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:31.719 [2024-11-29 07:51:21.509510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.719 [2024-11-29 07:51:21.509519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.719 [2024-11-29 07:51:21.509636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.719 [2024-11-29 07:51:21.509649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:31.719 [2024-11-29 07:51:21.509661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.719 [2024-11-29 07:51:21.509669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.719 [2024-11-29 07:51:21.509687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.719 [2024-11-29 07:51:21.509696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:31.719 [2024-11-29 07:51:21.509707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.719 [2024-11-29 07:51:21.509714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.719 [2024-11-29 07:51:21.596423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.719 [2024-11-29 07:51:21.596504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:31.719 [2024-11-29 07:51:21.596525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.719 [2024-11-29 07:51:21.596533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.980 [2024-11-29 07:51:21.667676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.980 [2024-11-29 07:51:21.667891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:31.980 [2024-11-29 07:51:21.667919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.980 [2024-11-29 07:51:21.667929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.980 [2024-11-29 07:51:21.668026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.980 [2024-11-29 07:51:21.668036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:31.980 [2024-11-29 07:51:21.668048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.980 [2024-11-29 07:51:21.668056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.980 [2024-11-29 07:51:21.668125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.980 [2024-11-29 07:51:21.668136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:31.980 [2024-11-29 07:51:21.668147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.980 [2024-11-29 07:51:21.668156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.980 [2024-11-29 07:51:21.668268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.980 [2024-11-29 07:51:21.668281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:31.980 [2024-11-29 07:51:21.668295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000
ms 00:19:31.980 [2024-11-29 07:51:21.668304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.980 [2024-11-29 07:51:21.668349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.980 [2024-11-29 07:51:21.668358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:31.980 [2024-11-29 07:51:21.668369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.980 [2024-11-29 07:51:21.668377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.980 [2024-11-29 07:51:21.668418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.980 [2024-11-29 07:51:21.668431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:31.980 [2024-11-29 07:51:21.668441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.980 [2024-11-29 07:51:21.668487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.980 [2024-11-29 07:51:21.668540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.980 [2024-11-29 07:51:21.668551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:31.980 [2024-11-29 07:51:21.668562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.980 [2024-11-29 07:51:21.668570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.980 [2024-11-29 07:51:21.668721] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 600.730 ms, result 0 00:19:31.980 true 00:19:31.980 07:51:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 75935 00:19:31.980 07:51:21 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 75935 ']' 00:19:31.980 07:51:21 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 75935 00:19:31.980 07:51:21 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:19:31.980 07:51:21 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:31.980 07:51:21 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75935 00:19:31.980 killing process with pid 75935 00:19:31.980 Received shutdown signal, test time was about 4.000000 seconds 00:19:31.980 00:19:31.980 Latency(us) 00:19:31.980 [2024-11-29T07:51:21.924Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:31.980 [2024-11-29T07:51:21.924Z] =================================================================================================================== 00:19:31.980 [2024-11-29T07:51:21.924Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:31.980 07:51:21 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:31.980 07:51:21 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:31.980 07:51:21 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75935' 00:19:31.980 07:51:21 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 75935 00:19:31.980 07:51:21 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 75935 00:19:37.275 Remove shared memory files 00:19:37.275 07:51:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:37.275 07:51:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:19:37.275 07:51:27 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:37.275 07:51:27 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:19:37.275 07:51:27 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:19:37.275 07:51:27 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:19:37.275 07:51:27 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:37.275 07:51:27 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:19:37.275 ************************************ 00:19:37.275 END TEST ftl_bdevperf 00:19:37.275 ************************************ 00:19:37.275 00:19:37.275 real 0m27.177s 00:19:37.275 user 0m29.662s 00:19:37.275 sys 0m1.089s 00:19:37.275 07:51:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:37.275 07:51:27 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:37.275 07:51:27 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:37.275 07:51:27 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:37.275 07:51:27 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:37.275 07:51:27 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:37.536 ************************************ 00:19:37.536 START TEST ftl_trim 00:19:37.536 ************************************ 00:19:37.536 07:51:27 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:37.536 * Looking for test storage... 00:19:37.536 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:37.536 07:51:27 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:37.536 07:51:27 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:19:37.536 07:51:27 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:37.536 07:51:27 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:37.536 07:51:27 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:37.536 07:51:27 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:37.536 07:51:27 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:37.536 07:51:27 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:19:37.536 07:51:27 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:19:37.536 07:51:27 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:19:37.536 07:51:27 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:19:37.536 07:51:27 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:19:37.536 07:51:27 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:19:37.536 07:51:27 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:19:37.536 07:51:27 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:37.536 07:51:27 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:19:37.536 07:51:27 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:19:37.536 07:51:27 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:37.536 07:51:27 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:37.536 07:51:27 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:19:37.536 07:51:27 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:19:37.536 07:51:27 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:37.536 07:51:27 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:19:37.536 07:51:27 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:19:37.536 07:51:27 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:19:37.536 07:51:27 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:19:37.536 07:51:27 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:37.536 07:51:27 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:19:37.536 07:51:27 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:19:37.536 07:51:27 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:37.536 07:51:27 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:37.536 07:51:27 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:19:37.536 07:51:27 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:37.536 07:51:27 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:37.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:37.536 --rc genhtml_branch_coverage=1 00:19:37.536 --rc genhtml_function_coverage=1 00:19:37.536 --rc genhtml_legend=1 00:19:37.536 --rc geninfo_all_blocks=1 00:19:37.536 --rc geninfo_unexecuted_blocks=1 00:19:37.536 00:19:37.536 ' 00:19:37.536 07:51:27 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:37.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:37.536 --rc genhtml_branch_coverage=1 00:19:37.536 --rc genhtml_function_coverage=1 00:19:37.536 --rc genhtml_legend=1 00:19:37.536 --rc geninfo_all_blocks=1 00:19:37.536 --rc geninfo_unexecuted_blocks=1 00:19:37.536 00:19:37.536 ' 00:19:37.536 07:51:27 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:37.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:37.536 --rc genhtml_branch_coverage=1 00:19:37.536 --rc genhtml_function_coverage=1 00:19:37.536 --rc genhtml_legend=1 00:19:37.536 --rc geninfo_all_blocks=1 00:19:37.536 --rc geninfo_unexecuted_blocks=1 00:19:37.536 00:19:37.536 ' 00:19:37.536 07:51:27 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:37.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:37.536 --rc genhtml_branch_coverage=1 00:19:37.536 --rc genhtml_function_coverage=1 00:19:37.536 --rc genhtml_legend=1 00:19:37.536 --rc geninfo_all_blocks=1 00:19:37.536 --rc geninfo_unexecuted_blocks=1 00:19:37.536 00:19:37.536 ' 00:19:37.536 07:51:27 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:37.536 07:51:27 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:19:37.536 07:51:27 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:37.536 07:51:27 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:37.536 07:51:27 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
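The lt/cmp_versions trace just above is scripts/common.sh gating the lcov coverage flags on the installed lcov version: each version string is split on '.', '-' and ':' and the numeric components are compared left to right, so 1.15 sorts before 2 and the branch/function LCOV_OPTS get exported. A minimal standalone sketch of the same component-wise comparison (version_lt is a hypothetical name and it assumes purely numeric components; the in-tree helpers additionally validate each component with a [[ ... =~ ^[0-9]+$ ]] check):

  version_lt() {                      # returns 0 (true) when $1 is strictly older than $2
      local -a a b
      local i
      IFS='.-:' read -ra a <<< "$1"   # split exactly the way cmp_versions does
      IFS='.-:' read -ra b <<< "$2"
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          ((${a[i]:-0} < ${b[i]:-0})) && return 0   # missing components compare as 0
          ((${a[i]:-0} > ${b[i]:-0})) && return 1
      done
      return 1                        # equal versions are not less-than
  }
  version_lt 1.15 2 && echo '1.15 < 2'   # prints: 1.15 < 2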
00:19:37.536 07:51:27 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:37.536 07:51:27 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:37.536 07:51:27 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:37.536 07:51:27 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:37.536 07:51:27 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:37.536 07:51:27 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:37.536 07:51:27 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:37.536 07:51:27 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:37.536 07:51:27 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:37.536 07:51:27 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:37.536 07:51:27 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:37.537 07:51:27 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:37.537 07:51:27 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:37.537 07:51:27 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:37.537 07:51:27 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:37.537 07:51:27 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:37.537 07:51:27 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:37.537 07:51:27 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:37.537 07:51:27 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:37.537 07:51:27 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:37.537 07:51:27 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:37.537 07:51:27 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:37.537 07:51:27 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:37.537 07:51:27 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:37.537 07:51:27 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:37.537 07:51:27 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:19:37.537 07:51:27 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:19:37.537 07:51:27 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:19:37.537 07:51:27 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:19:37.537 07:51:27 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:19:37.537 07:51:27 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:19:37.537 07:51:27 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:19:37.537 07:51:27 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:19:37.537 07:51:27 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:37.537 07:51:27 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:37.537 07:51:27 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:19:37.537 07:51:27 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=76293 00:19:37.537 07:51:27 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 76293 00:19:37.537 07:51:27 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76293 ']' 00:19:37.537 07:51:27 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:37.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:37.537 07:51:27 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:19:37.537 07:51:27 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:37.537 07:51:27 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:37.537 07:51:27 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:37.537 07:51:27 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:37.798 [2024-11-29 07:51:27.511498] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:19:37.798 [2024-11-29 07:51:27.511887] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76293 ] 00:19:37.798 [2024-11-29 07:51:27.679994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:38.060 [2024-11-29 07:51:27.805403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:38.060 [2024-11-29 07:51:27.805682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:38.060 [2024-11-29 07:51:27.805754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:39.004 07:51:28 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:39.004 07:51:28 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:39.004 07:51:28 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:39.004 07:51:28 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:19:39.004 07:51:28 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:39.004 07:51:28 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:19:39.004 07:51:28 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:19:39.004 07:51:28 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:39.004 07:51:28 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:39.004 07:51:28 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:19:39.004 07:51:28 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:39.004 07:51:28 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:39.004 07:51:28 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:39.004 07:51:28 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:39.004 07:51:28 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:39.004 07:51:28 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:39.267 07:51:29 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:39.267 { 00:19:39.267 "name": "nvme0n1", 00:19:39.267 "aliases": [ 
00:19:39.267 "65bf4e3e-7af4-4fe6-ab88-a2a11e645908" 00:19:39.267 ], 00:19:39.267 "product_name": "NVMe disk", 00:19:39.267 "block_size": 4096, 00:19:39.267 "num_blocks": 1310720, 00:19:39.267 "uuid": "65bf4e3e-7af4-4fe6-ab88-a2a11e645908", 00:19:39.267 "numa_id": -1, 00:19:39.267 "assigned_rate_limits": { 00:19:39.267 "rw_ios_per_sec": 0, 00:19:39.267 "rw_mbytes_per_sec": 0, 00:19:39.267 "r_mbytes_per_sec": 0, 00:19:39.267 "w_mbytes_per_sec": 0 00:19:39.267 }, 00:19:39.267 "claimed": true, 00:19:39.267 "claim_type": "read_many_write_one", 00:19:39.267 "zoned": false, 00:19:39.267 "supported_io_types": { 00:19:39.267 "read": true, 00:19:39.267 "write": true, 00:19:39.267 "unmap": true, 00:19:39.267 "flush": true, 00:19:39.267 "reset": true, 00:19:39.267 "nvme_admin": true, 00:19:39.267 "nvme_io": true, 00:19:39.267 "nvme_io_md": false, 00:19:39.267 "write_zeroes": true, 00:19:39.267 "zcopy": false, 00:19:39.267 "get_zone_info": false, 00:19:39.267 "zone_management": false, 00:19:39.267 "zone_append": false, 00:19:39.267 "compare": true, 00:19:39.267 "compare_and_write": false, 00:19:39.267 "abort": true, 00:19:39.267 "seek_hole": false, 00:19:39.267 "seek_data": false, 00:19:39.267 "copy": true, 00:19:39.267 "nvme_iov_md": false 00:19:39.267 }, 00:19:39.267 "driver_specific": { 00:19:39.267 "nvme": [ 00:19:39.267 { 00:19:39.267 "pci_address": "0000:00:11.0", 00:19:39.267 "trid": { 00:19:39.267 "trtype": "PCIe", 00:19:39.267 "traddr": "0000:00:11.0" 00:19:39.267 }, 00:19:39.267 "ctrlr_data": { 00:19:39.267 "cntlid": 0, 00:19:39.267 "vendor_id": "0x1b36", 00:19:39.267 "model_number": "QEMU NVMe Ctrl", 00:19:39.267 "serial_number": "12341", 00:19:39.267 "firmware_revision": "8.0.0", 00:19:39.267 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:39.267 "oacs": { 00:19:39.267 "security": 0, 00:19:39.267 "format": 1, 00:19:39.267 "firmware": 0, 00:19:39.267 "ns_manage": 1 00:19:39.267 }, 00:19:39.267 "multi_ctrlr": false, 00:19:39.267 "ana_reporting": false 00:19:39.267 }, 00:19:39.267 "vs": { 00:19:39.267 "nvme_version": "1.4" 00:19:39.267 }, 00:19:39.267 "ns_data": { 00:19:39.267 "id": 1, 00:19:39.267 "can_share": false 00:19:39.267 } 00:19:39.267 } 00:19:39.267 ], 00:19:39.267 "mp_policy": "active_passive" 00:19:39.267 } 00:19:39.267 } 00:19:39.267 ]' 00:19:39.267 07:51:29 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:39.268 07:51:29 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:39.268 07:51:29 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:39.268 07:51:29 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:39.268 07:51:29 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:39.268 07:51:29 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:19:39.268 07:51:29 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:19:39.268 07:51:29 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:39.268 07:51:29 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:19:39.268 07:51:29 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:39.268 07:51:29 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:39.530 07:51:29 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=de42bfa3-22af-4781-8413-1519290d6e89 00:19:39.530 07:51:29 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:19:39.530 07:51:29 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u de42bfa3-22af-4781-8413-1519290d6e89 00:19:39.789 07:51:29 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:40.047 07:51:29 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=63f142c5-da60-41a5-8119-e6e805144b63 00:19:40.047 07:51:29 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 63f142c5-da60-41a5-8119-e6e805144b63 00:19:40.306 07:51:30 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=0d0c329b-0ad8-45db-b21c-eaf40dad1612 00:19:40.306 07:51:30 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 0d0c329b-0ad8-45db-b21c-eaf40dad1612 00:19:40.306 07:51:30 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:19:40.306 07:51:30 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:40.306 07:51:30 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=0d0c329b-0ad8-45db-b21c-eaf40dad1612 00:19:40.306 07:51:30 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:19:40.306 07:51:30 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 0d0c329b-0ad8-45db-b21c-eaf40dad1612 00:19:40.306 07:51:30 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=0d0c329b-0ad8-45db-b21c-eaf40dad1612 00:19:40.306 07:51:30 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:40.306 07:51:30 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:40.306 07:51:30 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:40.306 07:51:30 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0d0c329b-0ad8-45db-b21c-eaf40dad1612 00:19:40.565 07:51:30 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:40.565 { 00:19:40.565 "name": "0d0c329b-0ad8-45db-b21c-eaf40dad1612", 00:19:40.565 "aliases": [ 00:19:40.565 "lvs/nvme0n1p0" 00:19:40.565 ], 00:19:40.565 "product_name": "Logical Volume", 00:19:40.565 "block_size": 4096, 00:19:40.565 "num_blocks": 26476544, 00:19:40.565 "uuid": "0d0c329b-0ad8-45db-b21c-eaf40dad1612", 00:19:40.565 "assigned_rate_limits": { 00:19:40.565 "rw_ios_per_sec": 0, 00:19:40.565 "rw_mbytes_per_sec": 0, 00:19:40.565 "r_mbytes_per_sec": 0, 00:19:40.565 "w_mbytes_per_sec": 0 00:19:40.565 }, 00:19:40.565 "claimed": false, 00:19:40.565 "zoned": false, 00:19:40.565 "supported_io_types": { 00:19:40.565 "read": true, 00:19:40.565 "write": true, 00:19:40.565 "unmap": true, 00:19:40.565 "flush": false, 00:19:40.565 "reset": true, 00:19:40.565 "nvme_admin": false, 00:19:40.565 "nvme_io": false, 00:19:40.565 "nvme_io_md": false, 00:19:40.565 "write_zeroes": true, 00:19:40.565 "zcopy": false, 00:19:40.565 "get_zone_info": false, 00:19:40.565 "zone_management": false, 00:19:40.565 "zone_append": false, 00:19:40.565 "compare": false, 00:19:40.565 "compare_and_write": false, 00:19:40.565 "abort": false, 00:19:40.565 "seek_hole": true, 00:19:40.565 "seek_data": true, 00:19:40.565 "copy": false, 00:19:40.565 "nvme_iov_md": false 00:19:40.565 }, 00:19:40.565 "driver_specific": { 00:19:40.565 "lvol": { 00:19:40.565 "lvol_store_uuid": "63f142c5-da60-41a5-8119-e6e805144b63", 00:19:40.565 "base_bdev": "nvme0n1", 00:19:40.565 "thin_provision": true, 00:19:40.565 "num_allocated_clusters": 0, 00:19:40.565 "snapshot": false, 00:19:40.565 "clone": false, 00:19:40.565 "esnap_clone": false 00:19:40.565 } 00:19:40.565 } 00:19:40.565 } 00:19:40.565 ]' 00:19:40.565 07:51:30 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:40.565 07:51:30 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:40.565 07:51:30 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:40.565 07:51:30 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:40.565 07:51:30 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:40.565 07:51:30 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:40.565 07:51:30 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:19:40.565 07:51:30 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:19:40.565 07:51:30 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:40.823 07:51:30 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:40.823 07:51:30 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:40.823 07:51:30 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 0d0c329b-0ad8-45db-b21c-eaf40dad1612 00:19:40.823 07:51:30 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=0d0c329b-0ad8-45db-b21c-eaf40dad1612 00:19:40.823 07:51:30 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:40.823 07:51:30 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:40.823 07:51:30 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:40.823 07:51:30 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0d0c329b-0ad8-45db-b21c-eaf40dad1612 00:19:41.081 07:51:30 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:41.081 { 00:19:41.081 "name": "0d0c329b-0ad8-45db-b21c-eaf40dad1612", 00:19:41.081 "aliases": [ 00:19:41.081 "lvs/nvme0n1p0" 00:19:41.082 ], 00:19:41.082 "product_name": "Logical Volume", 00:19:41.082 "block_size": 4096, 00:19:41.082 "num_blocks": 26476544, 00:19:41.082 "uuid": "0d0c329b-0ad8-45db-b21c-eaf40dad1612", 00:19:41.082 "assigned_rate_limits": { 00:19:41.082 "rw_ios_per_sec": 0, 00:19:41.082 "rw_mbytes_per_sec": 0, 00:19:41.082 "r_mbytes_per_sec": 0, 00:19:41.082 "w_mbytes_per_sec": 0 00:19:41.082 }, 00:19:41.082 "claimed": false, 00:19:41.082 "zoned": false, 00:19:41.082 "supported_io_types": { 00:19:41.082 "read": true, 00:19:41.082 "write": true, 00:19:41.082 "unmap": true, 00:19:41.082 "flush": false, 00:19:41.082 "reset": true, 00:19:41.082 "nvme_admin": false, 00:19:41.082 "nvme_io": false, 00:19:41.082 "nvme_io_md": false, 00:19:41.082 "write_zeroes": true, 00:19:41.082 "zcopy": false, 00:19:41.082 "get_zone_info": false, 00:19:41.082 "zone_management": false, 00:19:41.082 "zone_append": false, 00:19:41.082 "compare": false, 00:19:41.082 "compare_and_write": false, 00:19:41.082 "abort": false, 00:19:41.082 "seek_hole": true, 00:19:41.082 "seek_data": true, 00:19:41.082 "copy": false, 00:19:41.082 "nvme_iov_md": false 00:19:41.082 }, 00:19:41.082 "driver_specific": { 00:19:41.082 "lvol": { 00:19:41.082 "lvol_store_uuid": "63f142c5-da60-41a5-8119-e6e805144b63", 00:19:41.082 "base_bdev": "nvme0n1", 00:19:41.082 "thin_provision": true, 00:19:41.082 "num_allocated_clusters": 0, 00:19:41.082 "snapshot": false, 00:19:41.082 "clone": false, 00:19:41.082 "esnap_clone": false 00:19:41.082 } 00:19:41.082 } 00:19:41.082 } 00:19:41.082 ]' 00:19:41.082 07:51:30 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:41.082 07:51:30 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:19:41.082 07:51:30 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:41.082 07:51:30 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:41.082 07:51:30 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:41.082 07:51:30 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:41.082 07:51:30 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:19:41.082 07:51:30 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:41.082 07:51:31 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:19:41.082 07:51:31 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:19:41.082 07:51:31 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 0d0c329b-0ad8-45db-b21c-eaf40dad1612 00:19:41.082 07:51:31 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=0d0c329b-0ad8-45db-b21c-eaf40dad1612 00:19:41.082 07:51:31 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:41.082 07:51:31 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:41.082 07:51:31 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:41.082 07:51:31 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0d0c329b-0ad8-45db-b21c-eaf40dad1612 00:19:41.340 07:51:31 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:41.340 { 00:19:41.340 "name": "0d0c329b-0ad8-45db-b21c-eaf40dad1612", 00:19:41.340 "aliases": [ 00:19:41.340 "lvs/nvme0n1p0" 00:19:41.340 ], 00:19:41.340 "product_name": "Logical Volume", 00:19:41.340 "block_size": 4096, 00:19:41.340 "num_blocks": 26476544, 00:19:41.340 "uuid": "0d0c329b-0ad8-45db-b21c-eaf40dad1612", 00:19:41.340 "assigned_rate_limits": { 00:19:41.340 "rw_ios_per_sec": 0, 00:19:41.340 "rw_mbytes_per_sec": 0, 00:19:41.340 "r_mbytes_per_sec": 0, 00:19:41.340 "w_mbytes_per_sec": 0 00:19:41.340 }, 00:19:41.340 "claimed": false, 00:19:41.340 "zoned": false, 00:19:41.340 "supported_io_types": { 00:19:41.340 "read": true, 00:19:41.340 "write": true, 00:19:41.340 "unmap": true, 00:19:41.340 "flush": false, 00:19:41.340 "reset": true, 00:19:41.340 "nvme_admin": false, 00:19:41.340 "nvme_io": false, 00:19:41.341 "nvme_io_md": false, 00:19:41.341 "write_zeroes": true, 00:19:41.341 "zcopy": false, 00:19:41.341 "get_zone_info": false, 00:19:41.341 "zone_management": false, 00:19:41.341 "zone_append": false, 00:19:41.341 "compare": false, 00:19:41.341 "compare_and_write": false, 00:19:41.341 "abort": false, 00:19:41.341 "seek_hole": true, 00:19:41.341 "seek_data": true, 00:19:41.341 "copy": false, 00:19:41.341 "nvme_iov_md": false 00:19:41.341 }, 00:19:41.341 "driver_specific": { 00:19:41.341 "lvol": { 00:19:41.341 "lvol_store_uuid": "63f142c5-da60-41a5-8119-e6e805144b63", 00:19:41.341 "base_bdev": "nvme0n1", 00:19:41.341 "thin_provision": true, 00:19:41.341 "num_allocated_clusters": 0, 00:19:41.341 "snapshot": false, 00:19:41.341 "clone": false, 00:19:41.341 "esnap_clone": false 00:19:41.341 } 00:19:41.341 } 00:19:41.341 } 00:19:41.341 ]' 00:19:41.341 07:51:31 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:41.341 07:51:31 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:41.341 07:51:31 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:41.341 07:51:31 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:19:41.341 07:51:31 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:41.341 07:51:31 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:41.600 07:51:31 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:19:41.600 07:51:31 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 0d0c329b-0ad8-45db-b21c-eaf40dad1612 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:19:41.600 [2024-11-29 07:51:31.471626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.600 [2024-11-29 07:51:31.471660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:41.600 [2024-11-29 07:51:31.471673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:41.600 [2024-11-29 07:51:31.471680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.600 [2024-11-29 07:51:31.473906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.600 [2024-11-29 07:51:31.473935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:41.600 [2024-11-29 07:51:31.473944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.203 ms 00:19:41.600 [2024-11-29 07:51:31.473950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.600 [2024-11-29 07:51:31.474031] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:41.600 [2024-11-29 07:51:31.474644] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:41.600 [2024-11-29 07:51:31.474704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.600 [2024-11-29 07:51:31.474711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:41.600 [2024-11-29 07:51:31.474719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.677 ms 00:19:41.600 [2024-11-29 07:51:31.474725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.600 [2024-11-29 07:51:31.474822] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 3615a4e7-f157-4413-91c6-683bd04f0ab0 00:19:41.600 [2024-11-29 07:51:31.475750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.600 [2024-11-29 07:51:31.475776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:41.600 [2024-11-29 07:51:31.475783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:19:41.600 [2024-11-29 07:51:31.475790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.600 [2024-11-29 07:51:31.480451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.600 [2024-11-29 07:51:31.480478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:41.600 [2024-11-29 07:51:31.480485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.570 ms 00:19:41.600 [2024-11-29 07:51:31.480494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.600 [2024-11-29 07:51:31.480591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.600 [2024-11-29 07:51:31.480602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:41.600 [2024-11-29 07:51:31.480609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.054 ms 00:19:41.600 [2024-11-29 07:51:31.480618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.600 [2024-11-29 07:51:31.480647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.600 [2024-11-29 07:51:31.480654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:41.600 [2024-11-29 07:51:31.480661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:41.600 [2024-11-29 07:51:31.480669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.600 [2024-11-29 07:51:31.480703] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:41.600 [2024-11-29 07:51:31.483533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.600 [2024-11-29 07:51:31.483555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:41.600 [2024-11-29 07:51:31.483565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.832 ms 00:19:41.600 [2024-11-29 07:51:31.483571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.600 [2024-11-29 07:51:31.483612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.600 [2024-11-29 07:51:31.483631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:41.600 [2024-11-29 07:51:31.483638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:41.600 [2024-11-29 07:51:31.483644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.600 [2024-11-29 07:51:31.483680] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:41.600 [2024-11-29 07:51:31.483783] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:41.600 [2024-11-29 07:51:31.483795] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:41.600 [2024-11-29 07:51:31.483804] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:41.600 [2024-11-29 07:51:31.483813] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:41.600 [2024-11-29 07:51:31.483820] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:41.600 [2024-11-29 07:51:31.483827] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:41.600 [2024-11-29 07:51:31.483832] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:41.600 [2024-11-29 07:51:31.483840] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:41.600 [2024-11-29 07:51:31.483846] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:41.600 [2024-11-29 07:51:31.483854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.600 [2024-11-29 07:51:31.483860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:41.600 [2024-11-29 07:51:31.483867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.174 ms 00:19:41.600 [2024-11-29 07:51:31.483872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.600 [2024-11-29 07:51:31.483960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.600 
[2024-11-29 07:51:31.483967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:41.601 [2024-11-29 07:51:31.483975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:19:41.601 [2024-11-29 07:51:31.483980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.601 [2024-11-29 07:51:31.484096] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:41.601 [2024-11-29 07:51:31.484104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:41.601 [2024-11-29 07:51:31.484112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:41.601 [2024-11-29 07:51:31.484118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:41.601 [2024-11-29 07:51:31.484125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:41.601 [2024-11-29 07:51:31.484130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:41.601 [2024-11-29 07:51:31.484137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:41.601 [2024-11-29 07:51:31.484143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:41.601 [2024-11-29 07:51:31.484150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:41.601 [2024-11-29 07:51:31.484155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:41.601 [2024-11-29 07:51:31.484162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:41.601 [2024-11-29 07:51:31.484168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:41.601 [2024-11-29 07:51:31.484174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:41.601 [2024-11-29 07:51:31.484180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:41.601 [2024-11-29 07:51:31.484186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:41.601 [2024-11-29 07:51:31.484191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:41.601 [2024-11-29 07:51:31.484199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:41.601 [2024-11-29 07:51:31.484205] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:41.601 [2024-11-29 07:51:31.484212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:41.601 [2024-11-29 07:51:31.484217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:41.601 [2024-11-29 07:51:31.484225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:41.601 [2024-11-29 07:51:31.484230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:41.601 [2024-11-29 07:51:31.484238] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:41.601 [2024-11-29 07:51:31.484243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:41.601 [2024-11-29 07:51:31.484249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:41.601 [2024-11-29 07:51:31.484254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:41.601 [2024-11-29 07:51:31.484260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:41.601 [2024-11-29 07:51:31.484265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:41.601 [2024-11-29 07:51:31.484272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:19:41.601 [2024-11-29 07:51:31.484277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:41.601 [2024-11-29 07:51:31.484283] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:41.601 [2024-11-29 07:51:31.484287] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:41.601 [2024-11-29 07:51:31.484295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:41.601 [2024-11-29 07:51:31.484300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:41.601 [2024-11-29 07:51:31.484306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:41.601 [2024-11-29 07:51:31.484311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:41.601 [2024-11-29 07:51:31.484317] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:41.601 [2024-11-29 07:51:31.484322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:41.601 [2024-11-29 07:51:31.484328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:41.601 [2024-11-29 07:51:31.484333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:41.601 [2024-11-29 07:51:31.484340] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:41.601 [2024-11-29 07:51:31.484344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:41.601 [2024-11-29 07:51:31.484350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:41.601 [2024-11-29 07:51:31.484355] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:41.601 [2024-11-29 07:51:31.484363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:41.601 [2024-11-29 07:51:31.484369] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:41.601 [2024-11-29 07:51:31.484375] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:41.601 [2024-11-29 07:51:31.484381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:41.601 [2024-11-29 07:51:31.484390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:41.601 [2024-11-29 07:51:31.484395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:41.601 [2024-11-29 07:51:31.484402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:41.601 [2024-11-29 07:51:31.484406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:41.601 [2024-11-29 07:51:31.484413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:41.601 [2024-11-29 07:51:31.484420] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:41.601 [2024-11-29 07:51:31.484429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:41.601 [2024-11-29 07:51:31.484439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:41.601 [2024-11-29 07:51:31.484462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:41.601 [2024-11-29 07:51:31.484468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:19:41.601 [2024-11-29 07:51:31.484475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:41.601 [2024-11-29 07:51:31.484481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:41.601 [2024-11-29 07:51:31.484488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:41.601 [2024-11-29 07:51:31.484493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:41.601 [2024-11-29 07:51:31.484500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:41.601 [2024-11-29 07:51:31.484506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:41.601 [2024-11-29 07:51:31.484513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:41.601 [2024-11-29 07:51:31.484519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:41.601 [2024-11-29 07:51:31.484530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:41.601 [2024-11-29 07:51:31.484536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:41.601 [2024-11-29 07:51:31.484544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:41.601 [2024-11-29 07:51:31.484549] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:41.601 [2024-11-29 07:51:31.484557] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:41.601 [2024-11-29 07:51:31.484563] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:41.601 [2024-11-29 07:51:31.484570] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:41.602 [2024-11-29 07:51:31.484575] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:41.602 [2024-11-29 07:51:31.484583] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:41.602 [2024-11-29 07:51:31.484588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.602 [2024-11-29 07:51:31.484595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:41.602 [2024-11-29 07:51:31.484602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.547 ms 00:19:41.602 [2024-11-29 07:51:31.484608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
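Worth cross-checking against the layout dump a few lines up: ftl0 reports L2P entries: 23592960 with L2P address size: 4, and the l2p region in the NV cache layout occupies exactly 23592960 * 4 bytes = 90.00 MiB; at one 4-byte entry per 4096-byte logical block, the same table addresses 92160 MiB (90 GiB) of logical space. A quick sketch of the arithmetic in shell (both input values come straight from this dump):

  echo $(( 23592960 * 4 / 1024 / 1024 ))      # 90    -> MiB occupied by the l2p region
  echo $(( 23592960 * 4096 / 1024 / 1024 ))   # 92160 -> MiB of logical space it maps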
needs scrubbing, this may take a while. 00:19:41.602 [2024-11-29 07:51:31.484708] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:44.895 [2024-11-29 07:51:34.272675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.895 [2024-11-29 07:51:34.272915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:44.895 [2024-11-29 07:51:34.272936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2787.966 ms 00:19:44.895 [2024-11-29 07:51:34.272947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.895 [2024-11-29 07:51:34.298505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.895 [2024-11-29 07:51:34.298546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:44.895 [2024-11-29 07:51:34.298558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.318 ms 00:19:44.895 [2024-11-29 07:51:34.298568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.895 [2024-11-29 07:51:34.298698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.895 [2024-11-29 07:51:34.298711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:44.895 [2024-11-29 07:51:34.298736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:19:44.895 [2024-11-29 07:51:34.298751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.895 [2024-11-29 07:51:34.348628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.895 [2024-11-29 07:51:34.348668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:44.895 [2024-11-29 07:51:34.348681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.845 ms 00:19:44.895 [2024-11-29 07:51:34.348692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.895 [2024-11-29 07:51:34.348776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.895 [2024-11-29 07:51:34.348789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:44.895 [2024-11-29 07:51:34.348798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:44.895 [2024-11-29 07:51:34.348807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.895 [2024-11-29 07:51:34.349159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.895 [2024-11-29 07:51:34.349176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:44.895 [2024-11-29 07:51:34.349184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:19:44.895 [2024-11-29 07:51:34.349194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.895 [2024-11-29 07:51:34.349321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.895 [2024-11-29 07:51:34.349332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:44.895 [2024-11-29 07:51:34.349355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:19:44.895 [2024-11-29 07:51:34.349366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.895 [2024-11-29 07:51:34.363856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.895 [2024-11-29 07:51:34.363887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:19:44.895 [2024-11-29 07:51:34.363898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.459 ms 00:19:44.895 [2024-11-29 07:51:34.363907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.895 [2024-11-29 07:51:34.375229] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:44.895 [2024-11-29 07:51:34.389491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.895 [2024-11-29 07:51:34.389521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:44.895 [2024-11-29 07:51:34.389533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.472 ms 00:19:44.895 [2024-11-29 07:51:34.389542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.895 [2024-11-29 07:51:34.463934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.895 [2024-11-29 07:51:34.463978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:44.895 [2024-11-29 07:51:34.463993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.317 ms 00:19:44.895 [2024-11-29 07:51:34.464002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.895 [2024-11-29 07:51:34.464227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.895 [2024-11-29 07:51:34.464239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:44.895 [2024-11-29 07:51:34.464252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:19:44.895 [2024-11-29 07:51:34.464260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.895 [2024-11-29 07:51:34.487514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.895 [2024-11-29 07:51:34.487546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:44.895 [2024-11-29 07:51:34.487559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.212 ms 00:19:44.895 [2024-11-29 07:51:34.487569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.895 [2024-11-29 07:51:34.510419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.895 [2024-11-29 07:51:34.510463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:44.895 [2024-11-29 07:51:34.510475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.785 ms 00:19:44.895 [2024-11-29 07:51:34.510484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.895 [2024-11-29 07:51:34.511061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.895 [2024-11-29 07:51:34.511085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:44.895 [2024-11-29 07:51:34.511095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.523 ms 00:19:44.895 [2024-11-29 07:51:34.511103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.895 [2024-11-29 07:51:34.581084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.895 [2024-11-29 07:51:34.581244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:44.895 [2024-11-29 07:51:34.581267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.941 ms 00:19:44.895 [2024-11-29 07:51:34.581276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
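Reading the trace above: each management step is logged by trace_step as a four-line group (Action, name, duration, status), and status 0 means the step succeeded. Of the 3180.897 ms 'FTL startup' total reported below, the 2787.966 ms 'Scrub NV cache' step accounts for nearly all of it. A minimal sketch for turning such a console log into a per-step timing table (the console.log filename is hypothetical; the pairing assumes trace_step always prints name before duration, as it does here):

  # Pair every "name:" line with the "duration:" line that follows it,
  # giving two columns: step name, elapsed time.
  grep -oE '(name|duration): .*' console.log | paste - -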
00:19:44.895 [2024-11-29 07:51:34.605182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:44.895 [2024-11-29 07:51:34.605214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map
00:19:44.895 [2024-11-29 07:51:34.605227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.803 ms
00:19:44.895 [2024-11-29 07:51:34.605235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:44.895 [2024-11-29 07:51:34.628309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:44.895 [2024-11-29 07:51:34.628425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log
00:19:44.895 [2024-11-29 07:51:34.628459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.011 ms
00:19:44.895 [2024-11-29 07:51:34.628467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:44.895 [2024-11-29 07:51:34.651729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:44.895 [2024-11-29 07:51:34.651771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:19:44.895 [2024-11-29 07:51:34.651783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.193 ms
00:19:44.895 [2024-11-29 07:51:34.651790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:44.895 [2024-11-29 07:51:34.651842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:44.895 [2024-11-29 07:51:34.651851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:19:44.895 [2024-11-29 07:51:34.651863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:19:44.895 [2024-11-29 07:51:34.651870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:44.895 [2024-11-29 07:51:34.651953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:44.895 [2024-11-29 07:51:34.651963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:19:44.895 [2024-11-29 07:51:34.651972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms
00:19:44.895 [2024-11-29 07:51:34.651979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:44.895 [2024-11-29 07:51:34.652848] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:19:44.895 [2024-11-29 07:51:34.655813] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3180.897 ms, result 0
00:19:44.895 [2024-11-29 07:51:34.656690] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:19:44.895 {
00:19:44.895 "name": "ftl0",
00:19:44.895 "uuid": "3615a4e7-f157-4413-91c6-683bd04f0ab0"
00:19:44.895 }
00:19:44.895 07:51:34 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0
00:19:44.895 07:51:34 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0
00:19:44.895 07:51:34 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:19:44.895 07:51:34 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i
00:19:44.895 07:51:34 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:19:44.895 07:51:34 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:19:44.895 07:51:34 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine
00:19:45.193 07:51:34 ftl.ftl_trim -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000
[
00:19:45.193 {
00:19:45.193 "name": "ftl0",
00:19:45.193 "aliases": [
00:19:45.193 "3615a4e7-f157-4413-91c6-683bd04f0ab0"
00:19:45.193 ],
00:19:45.193 "product_name": "FTL disk",
00:19:45.193 "block_size": 4096,
00:19:45.193 "num_blocks": 23592960,
00:19:45.194 "uuid": "3615a4e7-f157-4413-91c6-683bd04f0ab0",
00:19:45.194 "assigned_rate_limits": {
00:19:45.194 "rw_ios_per_sec": 0,
00:19:45.194 "rw_mbytes_per_sec": 0,
00:19:45.194 "r_mbytes_per_sec": 0,
00:19:45.194 "w_mbytes_per_sec": 0
00:19:45.194 },
00:19:45.194 "claimed": false,
00:19:45.194 "zoned": false,
00:19:45.194 "supported_io_types": {
00:19:45.194 "read": true,
00:19:45.194 "write": true,
00:19:45.194 "unmap": true,
00:19:45.194 "flush": true,
00:19:45.194 "reset": false,
00:19:45.194 "nvme_admin": false,
00:19:45.194 "nvme_io": false,
00:19:45.194 "nvme_io_md": false,
00:19:45.194 "write_zeroes": true,
00:19:45.194 "zcopy": false,
00:19:45.194 "get_zone_info": false,
00:19:45.194 "zone_management": false,
00:19:45.194 "zone_append": false,
00:19:45.194 "compare": false,
00:19:45.194 "compare_and_write": false,
00:19:45.194 "abort": false,
00:19:45.194 "seek_hole": false,
00:19:45.194 "seek_data": false,
00:19:45.194 "copy": false,
00:19:45.194 "nvme_iov_md": false
00:19:45.194 },
00:19:45.194 "driver_specific": {
00:19:45.194 "ftl": {
00:19:45.194 "base_bdev": "0d0c329b-0ad8-45db-b21c-eaf40dad1612",
00:19:45.194 "cache": "nvc0n1p0"
00:19:45.194 }
00:19:45.194 }
00:19:45.194 }
00:19:45.194 ]
00:19:45.194 07:51:35 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0
00:19:45.194 07:51:35 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": ['
00:19:45.194 07:51:35 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:19:45.476 07:51:35 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}'
00:19:45.476 07:51:35 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0
00:19:45.736 07:51:35 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[
00:19:45.736 {
00:19:45.736 "name": "ftl0",
00:19:45.736 "aliases": [
00:19:45.736 "3615a4e7-f157-4413-91c6-683bd04f0ab0"
00:19:45.736 ],
00:19:45.736 "product_name": "FTL disk",
00:19:45.736 "block_size": 4096,
00:19:45.736 "num_blocks": 23592960,
00:19:45.736 "uuid": "3615a4e7-f157-4413-91c6-683bd04f0ab0",
00:19:45.736 "assigned_rate_limits": {
00:19:45.736 "rw_ios_per_sec": 0,
00:19:45.736 "rw_mbytes_per_sec": 0,
00:19:45.736 "r_mbytes_per_sec": 0,
00:19:45.736 "w_mbytes_per_sec": 0
00:19:45.736 },
00:19:45.736 "claimed": false,
00:19:45.736 "zoned": false,
00:19:45.736 "supported_io_types": {
00:19:45.736 "read": true,
00:19:45.736 "write": true,
00:19:45.736 "unmap": true,
00:19:45.736 "flush": true,
00:19:45.736 "reset": false,
00:19:45.736 "nvme_admin": false,
00:19:45.736 "nvme_io": false,
00:19:45.736 "nvme_io_md": false,
00:19:45.736 "write_zeroes": true,
00:19:45.736 "zcopy": false,
00:19:45.736 "get_zone_info": false,
00:19:45.736 "zone_management": false,
00:19:45.736 "zone_append": false,
00:19:45.736 "compare": false,
00:19:45.736 "compare_and_write": false,
00:19:45.736 "abort": false,
00:19:45.736 "seek_hole": false,
00:19:45.736 "seek_data": false,
00:19:45.736 "copy": false,
00:19:45.736 "nvme_iov_md": false
00:19:45.736 },
00:19:45.736 "driver_specific": {
00:19:45.736 "ftl": {
00:19:45.736 "base_bdev": "0d0c329b-0ad8-45db-b21c-eaf40dad1612",
00:19:45.736 "cache": "nvc0n1p0"
00:19:45.736 }
00:19:45.736 }
00:19:45.736 }
00:19:45.736 ]'
00:19:45.736 07:51:35 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks'
00:19:45.998 07:51:35 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960
00:19:45.998 07:51:35 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0
00:19:45.998 [2024-11-29 07:51:35.680997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:45.998 [2024-11-29 07:51:35.681041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:19:45.998 [2024-11-29 07:51:35.681055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:19:45.998 [2024-11-29 07:51:35.681065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.998 [2024-11-29 07:51:35.681107] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:19:45.998 [2024-11-29 07:51:35.683726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:45.998 [2024-11-29 07:51:35.683753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:19:45.998 [2024-11-29 07:51:35.683767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.601 ms
00:19:45.998 [2024-11-29 07:51:35.683776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.998 [2024-11-29 07:51:35.684401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:45.998 [2024-11-29 07:51:35.684420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:19:45.998 [2024-11-29 07:51:35.684430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.583 ms
00:19:45.998 [2024-11-29 07:51:35.684438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.998 [2024-11-29 07:51:35.688091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:45.998 [2024-11-29 07:51:35.688108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:19:45.998 [2024-11-29 07:51:35.688118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.612 ms
00:19:45.998 [2024-11-29 07:51:35.688125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.998 [2024-11-29 07:51:35.695048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:45.998 [2024-11-29 07:51:35.695165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:19:45.998 [2024-11-29 07:51:35.695185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.862 ms
00:19:45.998 [2024-11-29 07:51:35.695192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.998 [2024-11-29 07:51:35.719303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:45.998 [2024-11-29 07:51:35.719335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:19:45.998 [2024-11-29 07:51:35.719349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.019 ms
00:19:45.998 [2024-11-29 07:51:35.719357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.998 [2024-11-29 07:51:35.734246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:45.998 [2024-11-29 07:51:35.734364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:19:45.998 [2024-11-29 07:51:35.734386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0]
duration: 14.826 ms 00:19:45.998 [2024-11-29 07:51:35.734394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.998 [2024-11-29 07:51:35.734645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.998 [2024-11-29 07:51:35.734657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:45.998 [2024-11-29 07:51:35.734667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.151 ms 00:19:45.998 [2024-11-29 07:51:35.734675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.998 [2024-11-29 07:51:35.758102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.998 [2024-11-29 07:51:35.758132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:45.998 [2024-11-29 07:51:35.758143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.393 ms 00:19:45.998 [2024-11-29 07:51:35.758150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.998 [2024-11-29 07:51:35.781291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.998 [2024-11-29 07:51:35.781406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:45.998 [2024-11-29 07:51:35.781426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.082 ms 00:19:45.998 [2024-11-29 07:51:35.781434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.998 [2024-11-29 07:51:35.804271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.998 [2024-11-29 07:51:35.804370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:45.998 [2024-11-29 07:51:35.804387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.761 ms 00:19:45.998 [2024-11-29 07:51:35.804394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.998 [2024-11-29 07:51:35.827425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.998 [2024-11-29 07:51:35.827536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:45.998 [2024-11-29 07:51:35.827553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.886 ms 00:19:45.998 [2024-11-29 07:51:35.827561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.998 [2024-11-29 07:51:35.827621] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:45.998 [2024-11-29 07:51:35.827635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:45.998 [2024-11-29 07:51:35.827645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:45.998 [2024-11-29 07:51:35.827653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:45.998 [2024-11-29 07:51:35.827661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:45.998 [2024-11-29 07:51:35.827669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:45.998 [2024-11-29 07:51:35.827680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:45.998 [2024-11-29 07:51:35.827687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:45.998 [2024-11-29 07:51:35.827696] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:45.998 [2024-11-29 07:51:35.827703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:45.998 [2024-11-29 07:51:35.827712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.827719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.827728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.827736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.827745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.827752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.827761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.827769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.827777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.827784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.827808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.827816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.827826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.827833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.827842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.827850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.827859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.827866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.827875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.827883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.827893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.827901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.827911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 
[2024-11-29 07:51:35.827924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.827933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.827940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.827949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.827956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.827966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.827973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.827982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.827989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.827999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.828006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.828014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.828021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.828031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.828039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.828047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.828054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.828063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.828069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.828078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.828085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.828095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.828102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.828111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.828118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:19:45.999 [2024-11-29 07:51:35.828127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.828134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.828142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.828149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.828158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.828166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.828175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.828182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.828191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.828198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.828206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.828213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.828224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.828232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.828241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.828249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.828257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.828264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.828273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.828280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.828289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:45.999 [2024-11-29 07:51:35.828296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-29 07:51:35.828305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-29 07:51:35.828312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-29 07:51:35.828320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-29 07:51:35.828328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-29 07:51:35.828336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-29 07:51:35.828343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-29 07:51:35.828354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-29 07:51:35.828361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-29 07:51:35.828369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-29 07:51:35.828376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-29 07:51:35.828385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-29 07:51:35.828393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-29 07:51:35.828402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-29 07:51:35.828409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-29 07:51:35.828418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-29 07:51:35.828426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-29 07:51:35.828435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-29 07:51:35.828442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-29 07:51:35.828462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-29 07:51:35.828469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-29 07:51:35.828480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:46.000 [2024-11-29 07:51:35.828495] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:46.000 [2024-11-29 07:51:35.828506] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3615a4e7-f157-4413-91c6-683bd04f0ab0 00:19:46.000 [2024-11-29 07:51:35.828514] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:46.000 [2024-11-29 07:51:35.828523] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:46.000 [2024-11-29 07:51:35.828532] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:46.000 [2024-11-29 07:51:35.828541] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:46.000 [2024-11-29 07:51:35.828547] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:46.000 [2024-11-29 07:51:35.828555] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:19:46.000 [2024-11-29 07:51:35.828562] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:46.000 [2024-11-29 07:51:35.828570] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:46.000 [2024-11-29 07:51:35.828577] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:46.000 [2024-11-29 07:51:35.828586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.000 [2024-11-29 07:51:35.828593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:46.000 [2024-11-29 07:51:35.828603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.966 ms 00:19:46.000 [2024-11-29 07:51:35.828610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.000 [2024-11-29 07:51:35.841121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.000 [2024-11-29 07:51:35.841150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:46.000 [2024-11-29 07:51:35.841163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.467 ms 00:19:46.000 [2024-11-29 07:51:35.841171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.000 [2024-11-29 07:51:35.841567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.000 [2024-11-29 07:51:35.841578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:46.000 [2024-11-29 07:51:35.841588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.342 ms 00:19:46.000 [2024-11-29 07:51:35.841595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.000 [2024-11-29 07:51:35.885666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:46.000 [2024-11-29 07:51:35.885696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:46.000 [2024-11-29 07:51:35.885707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:46.000 [2024-11-29 07:51:35.885715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.000 [2024-11-29 07:51:35.885814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:46.000 [2024-11-29 07:51:35.885823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:46.000 [2024-11-29 07:51:35.885832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:46.000 [2024-11-29 07:51:35.885839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.000 [2024-11-29 07:51:35.885906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:46.000 [2024-11-29 07:51:35.885917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:46.000 [2024-11-29 07:51:35.885928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:46.000 [2024-11-29 07:51:35.885935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.000 [2024-11-29 07:51:35.885973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:46.000 [2024-11-29 07:51:35.885981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:46.000 [2024-11-29 07:51:35.885990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:46.000 [2024-11-29 07:51:35.885997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.262 [2024-11-29 07:51:35.967510] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:46.262 [2024-11-29 07:51:35.967549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:46.262 [2024-11-29 07:51:35.967560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:46.262 [2024-11-29 07:51:35.967567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.262 [2024-11-29 07:51:36.030402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:46.262 [2024-11-29 07:51:36.030440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:46.262 [2024-11-29 07:51:36.030472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:46.262 [2024-11-29 07:51:36.030480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.262 [2024-11-29 07:51:36.030591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:46.262 [2024-11-29 07:51:36.030601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:46.262 [2024-11-29 07:51:36.030616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:46.262 [2024-11-29 07:51:36.030624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.262 [2024-11-29 07:51:36.030693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:46.262 [2024-11-29 07:51:36.030701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:46.262 [2024-11-29 07:51:36.030710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:46.262 [2024-11-29 07:51:36.030717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.262 [2024-11-29 07:51:36.030833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:46.262 [2024-11-29 07:51:36.030843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:46.262 [2024-11-29 07:51:36.030852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:46.262 [2024-11-29 07:51:36.030862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.262 [2024-11-29 07:51:36.030920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:46.262 [2024-11-29 07:51:36.030934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:46.262 [2024-11-29 07:51:36.030943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:46.262 [2024-11-29 07:51:36.030950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.262 [2024-11-29 07:51:36.031002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:46.262 [2024-11-29 07:51:36.031011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:46.262 [2024-11-29 07:51:36.031022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:46.262 [2024-11-29 07:51:36.031030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.262 [2024-11-29 07:51:36.031086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:46.262 [2024-11-29 07:51:36.031106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:46.262 [2024-11-29 07:51:36.031116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:46.262 [2024-11-29 07:51:36.031122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0
00:19:46.262 [2024-11-29 07:51:36.031317] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 350.315 ms, result 0
00:19:46.262 true
00:19:46.262 07:51:36 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 76293
00:19:46.262 07:51:36 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76293 ']'
00:19:46.262 07:51:36 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76293
00:19:46.262 07:51:36 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
00:19:46.262 07:51:36 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:46.262 07:51:36 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76293
00:19:46.262 killing process with pid 76293
00:19:46.262 07:51:36 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:19:46.262 07:51:36 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:19:46.262 07:51:36 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76293'
00:19:46.262 07:51:36 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76293
00:19:46.262 07:51:36 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76293
00:19:52.841 07:51:42 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536
00:19:53.782 65536+0 records in
00:19:53.782 65536+0 records out
00:19:53.782 268435456 bytes (268 MB, 256 MiB) copied, 1.12726 s, 238 MB/s
00:19:53.782 07:51:43 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:19:53.782 [2024-11-29 07:51:43.693202] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization...
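Two sanity checks on the dd step above, plus the shape of the spdk_dd write that follows (a sketch; the only facts taken from the trace are the byte counts, the elapsed time, and the --if/--ob/--json flags):

  # 65536 blocks of 4 KiB is exactly the 268435456 bytes (256 MiB) dd reported:
  echo $(( 65536 * 4096 ))
  # and 268435456 B over 1.12726 s matches the 238 MB/s in dd's summary line:
  awk 'BEGIN { printf "%.0f MB/s\n", 268435456 / 1.12726 / 1e6 }'

spdk_dd then replays that 256 MiB random pattern into ftl0 through the bdev layer, using the JSON bdev config captured earlier via save_subsystem_config. Since ftl0 reported num_blocks 23592960 at block_size 4096 (23592960 * 4096 B = 90 GiB), this pass touches only the first 256 MiB of the device.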
00:19:53.782 [2024-11-29 07:51:43.693362] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76487 ] 00:19:54.043 [2024-11-29 07:51:43.859562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.303 [2024-11-29 07:51:44.001627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:54.564 [2024-11-29 07:51:44.313753] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:54.564 [2024-11-29 07:51:44.313988] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:54.564 [2024-11-29 07:51:44.473699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.564 [2024-11-29 07:51:44.473745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:54.564 [2024-11-29 07:51:44.473761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:54.564 [2024-11-29 07:51:44.473771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.564 [2024-11-29 07:51:44.476600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.564 [2024-11-29 07:51:44.476784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:54.564 [2024-11-29 07:51:44.476803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.811 ms 00:19:54.564 [2024-11-29 07:51:44.476811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.564 [2024-11-29 07:51:44.477262] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:54.564 [2024-11-29 07:51:44.478082] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:54.564 [2024-11-29 07:51:44.478118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.564 [2024-11-29 07:51:44.478129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:54.564 [2024-11-29 07:51:44.478139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.873 ms 00:19:54.564 [2024-11-29 07:51:44.478147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.564 [2024-11-29 07:51:44.479828] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:54.564 [2024-11-29 07:51:44.493535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.564 [2024-11-29 07:51:44.493572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:54.564 [2024-11-29 07:51:44.493585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.709 ms 00:19:54.564 [2024-11-29 07:51:44.493594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.564 [2024-11-29 07:51:44.493692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.564 [2024-11-29 07:51:44.493705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:54.564 [2024-11-29 07:51:44.493715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:19:54.564 [2024-11-29 07:51:44.493723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.564 [2024-11-29 07:51:44.501436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:54.564 [2024-11-29 07:51:44.501482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:54.564 [2024-11-29 07:51:44.501492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.670 ms 00:19:54.564 [2024-11-29 07:51:44.501500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.564 [2024-11-29 07:51:44.501596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.564 [2024-11-29 07:51:44.501606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:54.564 [2024-11-29 07:51:44.501615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:19:54.564 [2024-11-29 07:51:44.501623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.564 [2024-11-29 07:51:44.501653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.564 [2024-11-29 07:51:44.501663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:54.564 [2024-11-29 07:51:44.501671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:54.564 [2024-11-29 07:51:44.501678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.564 [2024-11-29 07:51:44.501701] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:54.564 [2024-11-29 07:51:44.505635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.564 [2024-11-29 07:51:44.505665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:54.564 [2024-11-29 07:51:44.505675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.940 ms 00:19:54.564 [2024-11-29 07:51:44.505683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.564 [2024-11-29 07:51:44.505748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.564 [2024-11-29 07:51:44.505758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:54.564 [2024-11-29 07:51:44.505767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:54.564 [2024-11-29 07:51:44.505775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.564 [2024-11-29 07:51:44.505796] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:54.564 [2024-11-29 07:51:44.505817] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:54.564 [2024-11-29 07:51:44.505854] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:54.564 [2024-11-29 07:51:44.505871] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:54.564 [2024-11-29 07:51:44.505978] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:54.565 [2024-11-29 07:51:44.505989] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:54.565 [2024-11-29 07:51:44.506000] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:54.565 [2024-11-29 07:51:44.506013] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:54.565 [2024-11-29 07:51:44.506022] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:54.565 [2024-11-29 07:51:44.506031] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:54.565 [2024-11-29 07:51:44.506039] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:54.565 [2024-11-29 07:51:44.506046] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:54.565 [2024-11-29 07:51:44.506054] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:54.565 [2024-11-29 07:51:44.506062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.565 [2024-11-29 07:51:44.506070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:54.565 [2024-11-29 07:51:44.506078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:19:54.565 [2024-11-29 07:51:44.506086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.565 [2024-11-29 07:51:44.506173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.565 [2024-11-29 07:51:44.506185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:54.565 [2024-11-29 07:51:44.506193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:54.565 [2024-11-29 07:51:44.506200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.565 [2024-11-29 07:51:44.506302] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:54.565 [2024-11-29 07:51:44.506312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:54.565 [2024-11-29 07:51:44.506321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:54.565 [2024-11-29 07:51:44.506329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:54.565 [2024-11-29 07:51:44.506337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:54.565 [2024-11-29 07:51:44.506344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:54.565 [2024-11-29 07:51:44.506353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:54.565 [2024-11-29 07:51:44.506362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:54.565 [2024-11-29 07:51:44.506369] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:54.565 [2024-11-29 07:51:44.506376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:54.565 [2024-11-29 07:51:44.506384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:54.565 [2024-11-29 07:51:44.506399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:54.565 [2024-11-29 07:51:44.506406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:54.565 [2024-11-29 07:51:44.506413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:54.565 [2024-11-29 07:51:44.506421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:54.565 [2024-11-29 07:51:44.506427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:54.565 [2024-11-29 07:51:44.506434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:54.565 [2024-11-29 07:51:44.506454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:54.565 [2024-11-29 07:51:44.506463] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:54.565 [2024-11-29 07:51:44.506470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:54.565 [2024-11-29 07:51:44.506477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:54.565 [2024-11-29 07:51:44.506484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:54.565 [2024-11-29 07:51:44.506491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:54.565 [2024-11-29 07:51:44.506498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:54.565 [2024-11-29 07:51:44.506505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:54.565 [2024-11-29 07:51:44.506511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:54.565 [2024-11-29 07:51:44.506518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:54.565 [2024-11-29 07:51:44.506525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:54.565 [2024-11-29 07:51:44.506533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:54.565 [2024-11-29 07:51:44.506540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:54.565 [2024-11-29 07:51:44.506546] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:54.565 [2024-11-29 07:51:44.506553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:54.565 [2024-11-29 07:51:44.506560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:54.565 [2024-11-29 07:51:44.506567] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:54.565 [2024-11-29 07:51:44.506574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:54.565 [2024-11-29 07:51:44.506601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:54.565 [2024-11-29 07:51:44.506609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:54.565 [2024-11-29 07:51:44.506616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:54.826 [2024-11-29 07:51:44.506622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:54.826 [2024-11-29 07:51:44.506633] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:54.826 [2024-11-29 07:51:44.506641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:54.826 [2024-11-29 07:51:44.506647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:54.826 [2024-11-29 07:51:44.506657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:54.826 [2024-11-29 07:51:44.506664] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:54.826 [2024-11-29 07:51:44.506672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:54.826 [2024-11-29 07:51:44.506683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:54.826 [2024-11-29 07:51:44.506690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:54.826 [2024-11-29 07:51:44.506699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:54.826 [2024-11-29 07:51:44.506706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:54.826 [2024-11-29 07:51:44.506713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:54.826 
[2024-11-29 07:51:44.506721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:54.826 [2024-11-29 07:51:44.506727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:54.826 [2024-11-29 07:51:44.506735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:54.826 [2024-11-29 07:51:44.506743] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:54.826 [2024-11-29 07:51:44.506752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:54.826 [2024-11-29 07:51:44.506761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:54.826 [2024-11-29 07:51:44.506768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:54.826 [2024-11-29 07:51:44.506775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:54.826 [2024-11-29 07:51:44.506781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:54.826 [2024-11-29 07:51:44.506789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:54.826 [2024-11-29 07:51:44.506796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:54.826 [2024-11-29 07:51:44.506803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:54.826 [2024-11-29 07:51:44.506811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:54.826 [2024-11-29 07:51:44.506818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:54.826 [2024-11-29 07:51:44.506825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:54.826 [2024-11-29 07:51:44.506832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:54.826 [2024-11-29 07:51:44.506839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:54.826 [2024-11-29 07:51:44.506846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:54.826 [2024-11-29 07:51:44.506853] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:54.826 [2024-11-29 07:51:44.506861] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:54.827 [2024-11-29 07:51:44.506869] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:54.827 [2024-11-29 07:51:44.506877] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:54.827 [2024-11-29 07:51:44.506884] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:54.827 [2024-11-29 07:51:44.506891] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:54.827 [2024-11-29 07:51:44.506898] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:54.827 [2024-11-29 07:51:44.506907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.827 [2024-11-29 07:51:44.506918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:54.827 [2024-11-29 07:51:44.506926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.674 ms 00:19:54.827 [2024-11-29 07:51:44.506935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.827 [2024-11-29 07:51:44.539618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.827 [2024-11-29 07:51:44.539660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:54.827 [2024-11-29 07:51:44.539672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.629 ms 00:19:54.827 [2024-11-29 07:51:44.539680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.827 [2024-11-29 07:51:44.539817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.827 [2024-11-29 07:51:44.539829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:54.827 [2024-11-29 07:51:44.539838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:19:54.827 [2024-11-29 07:51:44.539846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.827 [2024-11-29 07:51:44.594196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.827 [2024-11-29 07:51:44.594253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:54.827 [2024-11-29 07:51:44.594271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.326 ms 00:19:54.827 [2024-11-29 07:51:44.594281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.827 [2024-11-29 07:51:44.594399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.827 [2024-11-29 07:51:44.594414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:54.827 [2024-11-29 07:51:44.594424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:54.827 [2024-11-29 07:51:44.594432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.827 [2024-11-29 07:51:44.595091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.827 [2024-11-29 07:51:44.595124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:54.827 [2024-11-29 07:51:44.595147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.616 ms 00:19:54.827 [2024-11-29 07:51:44.595157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.827 [2024-11-29 07:51:44.595328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.827 [2024-11-29 07:51:44.595340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:54.827 [2024-11-29 07:51:44.595349] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:19:54.827 [2024-11-29 07:51:44.595357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.827 [2024-11-29 07:51:44.614250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.827 [2024-11-29 07:51:44.614301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:54.827 [2024-11-29 07:51:44.614313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.869 ms 00:19:54.827 [2024-11-29 07:51:44.614323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.827 [2024-11-29 07:51:44.629872] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:19:54.827 [2024-11-29 07:51:44.629927] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:54.827 [2024-11-29 07:51:44.629942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.827 [2024-11-29 07:51:44.629952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:54.827 [2024-11-29 07:51:44.629964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.463 ms 00:19:54.827 [2024-11-29 07:51:44.629971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.827 [2024-11-29 07:51:44.656779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.827 [2024-11-29 07:51:44.656986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:54.827 [2024-11-29 07:51:44.657010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.699 ms 00:19:54.827 [2024-11-29 07:51:44.657020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.827 [2024-11-29 07:51:44.670399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.827 [2024-11-29 07:51:44.670466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:54.827 [2024-11-29 07:51:44.670480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.195 ms 00:19:54.827 [2024-11-29 07:51:44.670488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.827 [2024-11-29 07:51:44.683412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.827 [2024-11-29 07:51:44.683485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:54.827 [2024-11-29 07:51:44.683498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.832 ms 00:19:54.827 [2024-11-29 07:51:44.683507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.827 [2024-11-29 07:51:44.684171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.827 [2024-11-29 07:51:44.684199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:54.827 [2024-11-29 07:51:44.684210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.529 ms 00:19:54.827 [2024-11-29 07:51:44.684218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.827 [2024-11-29 07:51:44.755822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.827 [2024-11-29 07:51:44.755881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:54.827 [2024-11-29 07:51:44.755897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 71.576 ms 00:19:54.827 [2024-11-29 07:51:44.755907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.827 [2024-11-29 07:51:44.768363] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:55.087 [2024-11-29 07:51:44.792126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.087 [2024-11-29 07:51:44.792185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:55.087 [2024-11-29 07:51:44.792201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.115 ms 00:19:55.087 [2024-11-29 07:51:44.792211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.087 [2024-11-29 07:51:44.792330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.087 [2024-11-29 07:51:44.792342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:55.087 [2024-11-29 07:51:44.792352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:19:55.087 [2024-11-29 07:51:44.792361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.087 [2024-11-29 07:51:44.792429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.087 [2024-11-29 07:51:44.792440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:55.087 [2024-11-29 07:51:44.792490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:19:55.087 [2024-11-29 07:51:44.792499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.087 [2024-11-29 07:51:44.792541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.087 [2024-11-29 07:51:44.792556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:55.087 [2024-11-29 07:51:44.792565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:55.087 [2024-11-29 07:51:44.792574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.087 [2024-11-29 07:51:44.792617] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:55.087 [2024-11-29 07:51:44.792629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.087 [2024-11-29 07:51:44.792637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:55.087 [2024-11-29 07:51:44.792646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:55.087 [2024-11-29 07:51:44.792656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.087 [2024-11-29 07:51:44.819358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.087 [2024-11-29 07:51:44.819632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:55.087 [2024-11-29 07:51:44.819658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.676 ms 00:19:55.088 [2024-11-29 07:51:44.819669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.088 [2024-11-29 07:51:44.819807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.088 [2024-11-29 07:51:44.819820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:55.088 [2024-11-29 07:51:44.819831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:19:55.088 [2024-11-29 07:51:44.819840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
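For reference before the summary line below: each FTL management step in the startup sequence above is reported by mngt/ftl_mngt.c as four consecutive NOTICE records (427: Action or Rollback, 428: name, 430: duration, 431: status), and a 459:finish_msg record then totals the whole process. A minimal sketch of tallying those records, assuming the flattened console text has been read into a single Python string (an illustration only, not part of the test run; the regexes simply mirror the record format visible above):

    import re

    # Jenkins prefixes every console line with an elapsed HH:MM:SS.mmm stamp;
    # in the flattened text those stamps are what separate the log entries.
    ENTRY_SEP = re.compile(r"\s+\d{2}:\d{2}:\d{2}\.\d{3}\s+")
    NAME_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+)")
    DUR_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([\d.]+) ms")

    def summarize_steps(console_text: str) -> None:
        steps, pending = [], None
        for entry in ENTRY_SEP.split(console_text):
            m = NAME_RE.search(entry)
            if m:
                pending = m.group(1).strip()   # 428 record: remember the step name
                continue
            m = DUR_RE.search(entry)
            if m and pending is not None:
                steps.append((pending, float(m.group(1))))  # 430 record: pair duration with it
                pending = None
        for name, ms in steps:
            print(f"{ms:9.3f} ms  {name}")
        print(f"{sum(ms for _, ms in steps):9.3f} ms  summed across steps")

The per-step sum need not equal the finish_msg figure exactly, since time spent between steps is not attributed to any step.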
00:19:55.088 [2024-11-29 07:51:44.821572] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:55.088 [2024-11-29 07:51:44.825128] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 347.448 ms, result 0 00:19:55.088 [2024-11-29 07:51:44.826643] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:55.088 [2024-11-29 07:51:44.840818] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:56.028  [2024-11-29T07:51:46.912Z] Copying: 13/256 [MB] (13 MBps) [2024-11-29T07:51:47.856Z] Copying: 29/256 [MB] (15 MBps) [2024-11-29T07:51:49.259Z] Copying: 42/256 [MB] (12 MBps) [2024-11-29T07:51:50.205Z] Copying: 54/256 [MB] (12 MBps) [2024-11-29T07:51:51.150Z] Copying: 64/256 [MB] (10 MBps) [2024-11-29T07:51:51.876Z] Copying: 75924/262144 [kB] (10036 kBps) [2024-11-29T07:51:53.294Z] Copying: 87/256 [MB] (13 MBps) [2024-11-29T07:51:53.863Z] Copying: 97/256 [MB] (10 MBps) [2024-11-29T07:51:55.250Z] Copying: 123/256 [MB] (26 MBps) [2024-11-29T07:51:56.194Z] Copying: 140/256 [MB] (16 MBps) [2024-11-29T07:51:57.139Z] Copying: 154/256 [MB] (14 MBps) [2024-11-29T07:51:58.084Z] Copying: 169/256 [MB] (15 MBps) [2024-11-29T07:51:59.028Z] Copying: 180/256 [MB] (10 MBps) [2024-11-29T07:51:59.968Z] Copying: 190/256 [MB] (10 MBps) [2024-11-29T07:52:00.903Z] Copying: 213/256 [MB] (23 MBps) [2024-11-29T07:52:01.166Z] Copying: 253/256 [MB] (39 MBps) [2024-11-29T07:52:01.166Z] Copying: 256/256 [MB] (average 15 MBps)[2024-11-29 07:52:00.977327] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:11.222 [2024-11-29 07:52:00.987879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.222 [2024-11-29 07:52:00.988106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:11.222 [2024-11-29 07:52:00.988132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:11.222 [2024-11-29 07:52:00.988152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.222 [2024-11-29 07:52:00.988187] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:11.222 [2024-11-29 07:52:00.991238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.222 [2024-11-29 07:52:00.991422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:11.222 [2024-11-29 07:52:00.991457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.035 ms 00:20:11.222 [2024-11-29 07:52:00.991466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.222 [2024-11-29 07:52:00.994573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.222 [2024-11-29 07:52:00.994624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:11.222 [2024-11-29 07:52:00.994635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.070 ms 00:20:11.222 [2024-11-29 07:52:00.994643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.222 [2024-11-29 07:52:01.003228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.222 [2024-11-29 07:52:01.003291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:11.222 [2024-11-29 07:52:01.003303] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.564 ms 00:20:11.222 [2024-11-29 07:52:01.003311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.222 [2024-11-29 07:52:01.010313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.222 [2024-11-29 07:52:01.010528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:11.222 [2024-11-29 07:52:01.010553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.953 ms 00:20:11.222 [2024-11-29 07:52:01.010561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.222 [2024-11-29 07:52:01.036653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.222 [2024-11-29 07:52:01.036707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:11.222 [2024-11-29 07:52:01.036720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.999 ms 00:20:11.222 [2024-11-29 07:52:01.036728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.222 [2024-11-29 07:52:01.053929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.222 [2024-11-29 07:52:01.054138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:11.222 [2024-11-29 07:52:01.054169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.145 ms 00:20:11.222 [2024-11-29 07:52:01.054178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.222 [2024-11-29 07:52:01.054400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.222 [2024-11-29 07:52:01.054432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:11.222 [2024-11-29 07:52:01.054474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:20:11.222 [2024-11-29 07:52:01.054497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.222 [2024-11-29 07:52:01.081067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.222 [2024-11-29 07:52:01.081254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:11.222 [2024-11-29 07:52:01.081274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.552 ms 00:20:11.222 [2024-11-29 07:52:01.081282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.222 [2024-11-29 07:52:01.107613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.222 [2024-11-29 07:52:01.107804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:11.222 [2024-11-29 07:52:01.107824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.203 ms 00:20:11.222 [2024-11-29 07:52:01.107831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.222 [2024-11-29 07:52:01.133454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.222 [2024-11-29 07:52:01.133506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:11.222 [2024-11-29 07:52:01.133518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.485 ms 00:20:11.222 [2024-11-29 07:52:01.133525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.222 [2024-11-29 07:52:01.159154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.222 [2024-11-29 07:52:01.159339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Set FTL clean state 00:20:11.222 [2024-11-29 07:52:01.159360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.543 ms 00:20:11.222 [2024-11-29 07:52:01.159368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.222 [2024-11-29 07:52:01.159424] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:11.222 [2024-11-29 07:52:01.159440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:11.222 [2024-11-29 07:52:01.159473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:11.222 [2024-11-29 07:52:01.159482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:11.222 [2024-11-29 07:52:01.159490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:11.222 [2024-11-29 07:52:01.159498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:11.222 [2024-11-29 07:52:01.159507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:11.222 [2024-11-29 07:52:01.159516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:11.222 [2024-11-29 07:52:01.159523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:11.222 [2024-11-29 07:52:01.159531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:11.222 [2024-11-29 07:52:01.159539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:11.222 [2024-11-29 07:52:01.159547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:11.222 [2024-11-29 07:52:01.159555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:11.222 [2024-11-29 07:52:01.159562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:11.222 [2024-11-29 07:52:01.159570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:11.222 [2024-11-29 07:52:01.159577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:11.222 [2024-11-29 07:52:01.159585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:11.222 [2024-11-29 07:52:01.159592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:11.222 [2024-11-29 07:52:01.159600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:11.222 [2024-11-29 07:52:01.159608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:11.222 [2024-11-29 07:52:01.159616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:11.222 [2024-11-29 07:52:01.159624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:11.222 [2024-11-29 07:52:01.159632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:11.222 [2024-11-29 
07:52:01.159640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:11.222 [2024-11-29 07:52:01.159647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:11.222 [2024-11-29 07:52:01.159655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:11.222 [2024-11-29 07:52:01.159665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:11.222 [2024-11-29 07:52:01.159673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:11.222 [2024-11-29 07:52:01.159683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:11.222 [2024-11-29 07:52:01.159691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:11.222 [2024-11-29 07:52:01.159699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:11.222 [2024-11-29 07:52:01.159707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:11.222 [2024-11-29 07:52:01.159715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:11.222 [2024-11-29 07:52:01.159722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:11.222 [2024-11-29 07:52:01.159730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:11.222 [2024-11-29 07:52:01.159738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:11.222 [2024-11-29 07:52:01.159745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:11.222 [2024-11-29 07:52:01.159753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:11.222 [2024-11-29 07:52:01.159761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:11.222 [2024-11-29 07:52:01.159769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:11.222 [2024-11-29 07:52:01.159777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:11.222 [2024-11-29 07:52:01.159784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:11.222 [2024-11-29 07:52:01.159792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:11.222 [2024-11-29 07:52:01.159800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:11.222 [2024-11-29 07:52:01.159808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:11.222 [2024-11-29 07:52:01.159815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:11.222 [2024-11-29 07:52:01.159822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:11.222 [2024-11-29 07:52:01.159830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:20:11.223 [2024-11-29 07:52:01.159838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:11.223 [2024-11-29 07:52:01.159846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:11.223 [2024-11-29 07:52:01.159854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:11.223 [2024-11-29 07:52:01.159861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:11.223 [2024-11-29 07:52:01.159869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:11.223 [2024-11-29 07:52:01.159876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:11.223 [2024-11-29 07:52:01.159884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:11.223 [2024-11-29 07:52:01.159891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:11.223 [2024-11-29 07:52:01.159898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:11.223 [2024-11-29 07:52:01.159906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:11.223 [2024-11-29 07:52:01.159915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:11.223 [2024-11-29 07:52:01.159922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:11.223 [2024-11-29 07:52:01.159929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:11.223 [2024-11-29 07:52:01.159937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:11.223 [2024-11-29 07:52:01.159945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:11.223 [2024-11-29 07:52:01.159953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:11.223 [2024-11-29 07:52:01.159960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:11.223 [2024-11-29 07:52:01.159968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:11.223 [2024-11-29 07:52:01.159976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:11.223 [2024-11-29 07:52:01.159983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:11.223 [2024-11-29 07:52:01.159990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:11.223 [2024-11-29 07:52:01.159998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:11.223 [2024-11-29 07:52:01.160005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:11.223 [2024-11-29 07:52:01.160013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:11.223 [2024-11-29 07:52:01.160020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:20:11.223 [2024-11-29 07:52:01.160027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:11.223 [2024-11-29 07:52:01.160035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:11.223 [2024-11-29 07:52:01.160042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:11.223 [2024-11-29 07:52:01.160049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:11.223 [2024-11-29 07:52:01.160057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:11.223 [2024-11-29 07:52:01.160064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:11.223 [2024-11-29 07:52:01.160071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:11.223 [2024-11-29 07:52:01.160078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:11.223 [2024-11-29 07:52:01.160085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:11.223 [2024-11-29 07:52:01.160093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:11.223 [2024-11-29 07:52:01.160100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:11.223 [2024-11-29 07:52:01.160108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:11.223 [2024-11-29 07:52:01.160115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:11.223 [2024-11-29 07:52:01.160123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:11.223 [2024-11-29 07:52:01.160130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:11.223 [2024-11-29 07:52:01.160137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:11.223 [2024-11-29 07:52:01.160144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:11.223 [2024-11-29 07:52:01.160151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:11.223 [2024-11-29 07:52:01.160159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:11.223 [2024-11-29 07:52:01.160167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:11.223 [2024-11-29 07:52:01.160174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:11.223 [2024-11-29 07:52:01.160183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:11.223 [2024-11-29 07:52:01.160201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:11.223 [2024-11-29 07:52:01.160208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:11.223 [2024-11-29 07:52:01.160216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:11.223 [2024-11-29 07:52:01.160224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:11.223 [2024-11-29 07:52:01.160232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:11.223 [2024-11-29 07:52:01.160240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:11.223 [2024-11-29 07:52:01.160256] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:11.223 [2024-11-29 07:52:01.160265] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3615a4e7-f157-4413-91c6-683bd04f0ab0 00:20:11.223 [2024-11-29 07:52:01.160274] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:11.223 [2024-11-29 07:52:01.160281] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:11.223 [2024-11-29 07:52:01.160288] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:11.223 [2024-11-29 07:52:01.160296] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:11.223 [2024-11-29 07:52:01.160303] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:11.223 [2024-11-29 07:52:01.160313] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:11.223 [2024-11-29 07:52:01.160320] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:11.223 [2024-11-29 07:52:01.160327] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:11.223 [2024-11-29 07:52:01.160333] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:11.223 [2024-11-29 07:52:01.160341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.223 [2024-11-29 07:52:01.160352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:11.223 [2024-11-29 07:52:01.160361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.919 ms 00:20:11.223 [2024-11-29 07:52:01.160368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.484 [2024-11-29 07:52:01.174141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.485 [2024-11-29 07:52:01.174318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:11.485 [2024-11-29 07:52:01.174336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.752 ms 00:20:11.485 [2024-11-29 07:52:01.174344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.485 [2024-11-29 07:52:01.174795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.485 [2024-11-29 07:52:01.174815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:11.485 [2024-11-29 07:52:01.174825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.410 ms 00:20:11.485 [2024-11-29 07:52:01.174833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.485 [2024-11-29 07:52:01.214170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.485 [2024-11-29 07:52:01.214228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:11.485 [2024-11-29 07:52:01.214241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.485 [2024-11-29 07:52:01.214249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.485 
[2024-11-29 07:52:01.214374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.485 [2024-11-29 07:52:01.214385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:11.485 [2024-11-29 07:52:01.214394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.485 [2024-11-29 07:52:01.214402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.485 [2024-11-29 07:52:01.214483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.485 [2024-11-29 07:52:01.214494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:11.485 [2024-11-29 07:52:01.214503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.485 [2024-11-29 07:52:01.214511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.485 [2024-11-29 07:52:01.214530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.485 [2024-11-29 07:52:01.214542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:11.485 [2024-11-29 07:52:01.214551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.485 [2024-11-29 07:52:01.214559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.485 [2024-11-29 07:52:01.299536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.485 [2024-11-29 07:52:01.299598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:11.485 [2024-11-29 07:52:01.299610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.485 [2024-11-29 07:52:01.299619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.485 [2024-11-29 07:52:01.369640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.485 [2024-11-29 07:52:01.369700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:11.485 [2024-11-29 07:52:01.369711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.485 [2024-11-29 07:52:01.369720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.485 [2024-11-29 07:52:01.369808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.485 [2024-11-29 07:52:01.369818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:11.485 [2024-11-29 07:52:01.369828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.485 [2024-11-29 07:52:01.369837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.485 [2024-11-29 07:52:01.369872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.485 [2024-11-29 07:52:01.369882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:11.485 [2024-11-29 07:52:01.369894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.485 [2024-11-29 07:52:01.369902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.485 [2024-11-29 07:52:01.370006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.485 [2024-11-29 07:52:01.370017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:11.485 [2024-11-29 07:52:01.370027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.485 [2024-11-29 07:52:01.370035] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:11.485 [2024-11-29 07:52:01.370069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:11.485 [2024-11-29 07:52:01.370080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:20:11.485 [2024-11-29 07:52:01.370088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:11.485 [2024-11-29 07:52:01.370100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:11.485 [2024-11-29 07:52:01.370145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:11.485 [2024-11-29 07:52:01.370155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:20:11.485 [2024-11-29 07:52:01.370165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:11.485 [2024-11-29 07:52:01.370173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:11.485 [2024-11-29 07:52:01.370225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:11.485 [2024-11-29 07:52:01.370236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:20:11.485 [2024-11-29 07:52:01.370247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:11.485 [2024-11-29 07:52:01.370256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:11.485 [2024-11-29 07:52:01.370419] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 382.528 ms, result 0
00:20:12.871
00:20:12.871
00:20:12.871 07:52:02 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=76683
00:20:12.871 07:52:02 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 76683
00:20:12.871 07:52:02 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:20:12.871 07:52:02 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76683 ']'
00:20:12.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:12.871 07:52:02 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:12.871 07:52:02 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:12.871 07:52:02 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:12.871 07:52:02 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:12.871 07:52:02 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:20:12.871 [2024-11-29 07:52:02.543205] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization...
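For reference: trim.sh has just started a fresh spdk_tgt (with -L ftl_init tracing enabled), and waitforlisten, the shell helper in common/autotest_common.sh whose xtrace appears above, blocks until that process serves RPC on the UNIX socket /var/tmp/spdk.sock; per the trace, its polling is bounded by max_retries=100. A minimal Python sketch of the same polling idea, assuming only the socket path shown above (an illustration, not SPDK's actual helper):

    import socket
    import time

    def wait_for_rpc_socket(path: str = "/var/tmp/spdk.sock",
                            timeout: float = 30.0) -> None:
        # Poll until something is listening on the UNIX-domain socket,
        # i.e. the freshly launched target has finished starting up.
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                s.connect(path)
                return                    # connected: the RPC server is up
            except OSError:
                time.sleep(0.2)           # not ready yet; retry shortly
            finally:
                s.close()
        raise TimeoutError(f"{path} did not start listening within {timeout}s")

Once the socket accepts connections the test proceeds, as the DPDK EAL and reactor start-up messages that follow show.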
00:20:12.871 [2024-11-29 07:52:02.543361] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76683 ] 00:20:12.871 [2024-11-29 07:52:02.699230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.132 [2024-11-29 07:52:02.822704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:13.705 07:52:03 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:13.705 07:52:03 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:20:13.705 07:52:03 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:13.966 [2024-11-29 07:52:03.728375] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:13.966 [2024-11-29 07:52:03.728488] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:13.966 [2024-11-29 07:52:03.884619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.967 [2024-11-29 07:52:03.884686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:13.967 [2024-11-29 07:52:03.884704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:13.967 [2024-11-29 07:52:03.884713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.967 [2024-11-29 07:52:03.887891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.967 [2024-11-29 07:52:03.887949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:13.967 [2024-11-29 07:52:03.887962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.154 ms 00:20:13.967 [2024-11-29 07:52:03.887970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.967 [2024-11-29 07:52:03.888112] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:13.967 [2024-11-29 07:52:03.889047] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:13.967 [2024-11-29 07:52:03.889106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.967 [2024-11-29 07:52:03.889115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:13.967 [2024-11-29 07:52:03.889127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.007 ms 00:20:13.967 [2024-11-29 07:52:03.889137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.967 [2024-11-29 07:52:03.891131] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:13.967 [2024-11-29 07:52:03.905686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.967 [2024-11-29 07:52:03.905751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:13.967 [2024-11-29 07:52:03.905766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.561 ms 00:20:13.967 [2024-11-29 07:52:03.905776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.967 [2024-11-29 07:52:03.905902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.967 [2024-11-29 07:52:03.905917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:13.967 [2024-11-29 07:52:03.905932] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:20:13.967 [2024-11-29 07:52:03.905943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.230 [2024-11-29 07:52:03.914628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.230 [2024-11-29 07:52:03.914683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:14.230 [2024-11-29 07:52:03.914694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.623 ms 00:20:14.230 [2024-11-29 07:52:03.914704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.230 [2024-11-29 07:52:03.914826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.230 [2024-11-29 07:52:03.914839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:14.230 [2024-11-29 07:52:03.914849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:20:14.230 [2024-11-29 07:52:03.914863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.230 [2024-11-29 07:52:03.914889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.230 [2024-11-29 07:52:03.914899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:14.230 [2024-11-29 07:52:03.914907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:14.230 [2024-11-29 07:52:03.914916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.230 [2024-11-29 07:52:03.914942] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:14.230 [2024-11-29 07:52:03.919216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.230 [2024-11-29 07:52:03.919259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:14.230 [2024-11-29 07:52:03.919273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.278 ms 00:20:14.230 [2024-11-29 07:52:03.919280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.230 [2024-11-29 07:52:03.919362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.230 [2024-11-29 07:52:03.919372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:14.230 [2024-11-29 07:52:03.919388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:14.230 [2024-11-29 07:52:03.919396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.230 [2024-11-29 07:52:03.919419] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:14.230 [2024-11-29 07:52:03.919439] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:14.230 [2024-11-29 07:52:03.919507] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:14.230 [2024-11-29 07:52:03.919524] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:14.230 [2024-11-29 07:52:03.919632] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:14.230 [2024-11-29 07:52:03.919644] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:14.230 [2024-11-29 07:52:03.919663] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:14.230 [2024-11-29 07:52:03.919674] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:14.230 [2024-11-29 07:52:03.919686] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:14.230 [2024-11-29 07:52:03.919694] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:14.230 [2024-11-29 07:52:03.919703] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:14.230 [2024-11-29 07:52:03.919710] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:14.230 [2024-11-29 07:52:03.919722] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:14.230 [2024-11-29 07:52:03.919730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.230 [2024-11-29 07:52:03.919740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:14.230 [2024-11-29 07:52:03.919748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:20:14.230 [2024-11-29 07:52:03.919760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.230 [2024-11-29 07:52:03.919847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.230 [2024-11-29 07:52:03.919859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:14.230 [2024-11-29 07:52:03.919867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:14.230 [2024-11-29 07:52:03.919877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.230 [2024-11-29 07:52:03.919979] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:14.230 [2024-11-29 07:52:03.919991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:14.230 [2024-11-29 07:52:03.920000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:14.230 [2024-11-29 07:52:03.920011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.230 [2024-11-29 07:52:03.920019] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:14.230 [2024-11-29 07:52:03.920028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:14.230 [2024-11-29 07:52:03.920034] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:14.230 [2024-11-29 07:52:03.920047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:14.230 [2024-11-29 07:52:03.920055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:14.230 [2024-11-29 07:52:03.920064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:14.230 [2024-11-29 07:52:03.920071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:14.230 [2024-11-29 07:52:03.920079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:14.230 [2024-11-29 07:52:03.920086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:14.230 [2024-11-29 07:52:03.920094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:14.230 [2024-11-29 07:52:03.920102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:14.230 [2024-11-29 07:52:03.920111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.230 
[2024-11-29 07:52:03.920119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:14.230 [2024-11-29 07:52:03.920128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:14.230 [2024-11-29 07:52:03.920142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.230 [2024-11-29 07:52:03.920151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:14.230 [2024-11-29 07:52:03.920157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:14.230 [2024-11-29 07:52:03.920166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:14.230 [2024-11-29 07:52:03.920173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:14.230 [2024-11-29 07:52:03.920183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:14.230 [2024-11-29 07:52:03.920190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:14.230 [2024-11-29 07:52:03.920199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:14.230 [2024-11-29 07:52:03.920206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:14.230 [2024-11-29 07:52:03.920214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:14.230 [2024-11-29 07:52:03.920221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:14.230 [2024-11-29 07:52:03.920231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:14.230 [2024-11-29 07:52:03.920237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:14.231 [2024-11-29 07:52:03.920246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:14.231 [2024-11-29 07:52:03.920252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:14.231 [2024-11-29 07:52:03.920262] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:14.231 [2024-11-29 07:52:03.920269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:14.231 [2024-11-29 07:52:03.920278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:14.231 [2024-11-29 07:52:03.920284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:14.231 [2024-11-29 07:52:03.920293] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:14.231 [2024-11-29 07:52:03.920300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:14.231 [2024-11-29 07:52:03.920311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.231 [2024-11-29 07:52:03.920317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:14.231 [2024-11-29 07:52:03.920326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:14.231 [2024-11-29 07:52:03.920333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.231 [2024-11-29 07:52:03.920341] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:14.231 [2024-11-29 07:52:03.920352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:14.231 [2024-11-29 07:52:03.920362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:14.231 [2024-11-29 07:52:03.920369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.231 [2024-11-29 07:52:03.920379] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:20:14.231 [2024-11-29 07:52:03.920388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:14.231 [2024-11-29 07:52:03.920397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:14.231 [2024-11-29 07:52:03.920405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:14.231 [2024-11-29 07:52:03.920413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:14.231 [2024-11-29 07:52:03.920420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:14.231 [2024-11-29 07:52:03.920430] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:14.231 [2024-11-29 07:52:03.920455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:14.231 [2024-11-29 07:52:03.920469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:14.231 [2024-11-29 07:52:03.920476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:14.231 [2024-11-29 07:52:03.920486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:14.231 [2024-11-29 07:52:03.920494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:14.231 [2024-11-29 07:52:03.920503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:14.231 [2024-11-29 07:52:03.920511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:14.231 [2024-11-29 07:52:03.920520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:14.231 [2024-11-29 07:52:03.920528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:14.231 [2024-11-29 07:52:03.920536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:14.231 [2024-11-29 07:52:03.920544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:14.231 [2024-11-29 07:52:03.920553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:14.231 [2024-11-29 07:52:03.920561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:14.231 [2024-11-29 07:52:03.920570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:14.231 [2024-11-29 07:52:03.920577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:14.231 [2024-11-29 07:52:03.920587] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:14.231 [2024-11-29 
07:52:03.920597] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:14.231 [2024-11-29 07:52:03.920609] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:14.231 [2024-11-29 07:52:03.920617] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:14.231 [2024-11-29 07:52:03.920627] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:14.231 [2024-11-29 07:52:03.920634] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:14.231 [2024-11-29 07:52:03.920644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.231 [2024-11-29 07:52:03.920654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:14.231 [2024-11-29 07:52:03.920664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.732 ms 00:20:14.231 [2024-11-29 07:52:03.920673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.231 [2024-11-29 07:52:03.953339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.231 [2024-11-29 07:52:03.953608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:14.231 [2024-11-29 07:52:03.953635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.605 ms 00:20:14.231 [2024-11-29 07:52:03.953646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.231 [2024-11-29 07:52:03.953790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.231 [2024-11-29 07:52:03.953802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:14.231 [2024-11-29 07:52:03.953813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:20:14.231 [2024-11-29 07:52:03.953820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.231 [2024-11-29 07:52:03.989334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.231 [2024-11-29 07:52:03.989557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:14.231 [2024-11-29 07:52:03.989584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.487 ms 00:20:14.231 [2024-11-29 07:52:03.989592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.231 [2024-11-29 07:52:03.989695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.231 [2024-11-29 07:52:03.989707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:14.231 [2024-11-29 07:52:03.989719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:14.231 [2024-11-29 07:52:03.989727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.231 [2024-11-29 07:52:03.990260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.231 [2024-11-29 07:52:03.990297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:14.231 [2024-11-29 07:52:03.990309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.506 ms 00:20:14.231 [2024-11-29 07:52:03.990317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:14.231 [2024-11-29 07:52:03.990493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.231 [2024-11-29 07:52:03.990510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:14.231 [2024-11-29 07:52:03.990522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:20:14.231 [2024-11-29 07:52:03.990530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.231 [2024-11-29 07:52:04.008755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.231 [2024-11-29 07:52:04.008951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:14.231 [2024-11-29 07:52:04.008976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.196 ms 00:20:14.231 [2024-11-29 07:52:04.008985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.231 [2024-11-29 07:52:04.033609] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:14.231 [2024-11-29 07:52:04.033670] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:14.231 [2024-11-29 07:52:04.033692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.231 [2024-11-29 07:52:04.033702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:14.231 [2024-11-29 07:52:04.033715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.582 ms 00:20:14.231 [2024-11-29 07:52:04.033730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.231 [2024-11-29 07:52:04.060284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.231 [2024-11-29 07:52:04.060355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:14.231 [2024-11-29 07:52:04.060372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.440 ms 00:20:14.231 [2024-11-29 07:52:04.060381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.231 [2024-11-29 07:52:04.074134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.231 [2024-11-29 07:52:04.074184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:14.231 [2024-11-29 07:52:04.074203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.624 ms 00:20:14.231 [2024-11-29 07:52:04.074211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.231 [2024-11-29 07:52:04.087367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.231 [2024-11-29 07:52:04.087413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:14.231 [2024-11-29 07:52:04.087428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.058 ms 00:20:14.231 [2024-11-29 07:52:04.087435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.231 [2024-11-29 07:52:04.088137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.231 [2024-11-29 07:52:04.088171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:14.231 [2024-11-29 07:52:04.088185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.554 ms 00:20:14.231 [2024-11-29 07:52:04.088193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.231 [2024-11-29 
07:52:04.158187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.231 [2024-11-29 07:52:04.158268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:14.232 [2024-11-29 07:52:04.158289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.961 ms 00:20:14.232 [2024-11-29 07:52:04.158299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.232 [2024-11-29 07:52:04.169806] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:14.493 [2024-11-29 07:52:04.189632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.493 [2024-11-29 07:52:04.189697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:14.493 [2024-11-29 07:52:04.189710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.218 ms 00:20:14.493 [2024-11-29 07:52:04.189720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.493 [2024-11-29 07:52:04.189821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.493 [2024-11-29 07:52:04.189836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:14.493 [2024-11-29 07:52:04.189845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:20:14.493 [2024-11-29 07:52:04.189856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.493 [2024-11-29 07:52:04.189914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.493 [2024-11-29 07:52:04.189926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:14.493 [2024-11-29 07:52:04.189935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:20:14.493 [2024-11-29 07:52:04.189947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.493 [2024-11-29 07:52:04.189973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.493 [2024-11-29 07:52:04.189984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:14.493 [2024-11-29 07:52:04.189992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:14.493 [2024-11-29 07:52:04.190004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.493 [2024-11-29 07:52:04.190040] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:14.493 [2024-11-29 07:52:04.190055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.493 [2024-11-29 07:52:04.190067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:14.493 [2024-11-29 07:52:04.190078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:14.493 [2024-11-29 07:52:04.190088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.493 [2024-11-29 07:52:04.217034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.493 [2024-11-29 07:52:04.217231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:14.493 [2024-11-29 07:52:04.217260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.915 ms 00:20:14.493 [2024-11-29 07:52:04.217269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.493 [2024-11-29 07:52:04.217402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.493 [2024-11-29 07:52:04.217414] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:14.493 [2024-11-29 07:52:04.217429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:20:14.493 [2024-11-29 07:52:04.217438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.493 [2024-11-29 07:52:04.218809] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:14.493 [2024-11-29 07:52:04.222406] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 333.824 ms, result 0 00:20:14.494 [2024-11-29 07:52:04.225018] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:14.494 Some configs were skipped because the RPC state that can call them passed over. 00:20:14.494 07:52:04 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:20:14.755 [2024-11-29 07:52:04.462129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.755 [2024-11-29 07:52:04.462208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:14.755 [2024-11-29 07:52:04.462224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.331 ms 00:20:14.755 [2024-11-29 07:52:04.462236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.755 [2024-11-29 07:52:04.462273] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.482 ms, result 0 00:20:14.755 true 00:20:14.755 07:52:04 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:20:14.755 [2024-11-29 07:52:04.681710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.755 [2024-11-29 07:52:04.681922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:14.755 [2024-11-29 07:52:04.681950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.685 ms 00:20:14.755 [2024-11-29 07:52:04.681959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.755 [2024-11-29 07:52:04.682010] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.988 ms, result 0 00:20:14.755 true 00:20:15.016 07:52:04 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 76683 00:20:15.016 07:52:04 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76683 ']' 00:20:15.016 07:52:04 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76683 00:20:15.016 07:52:04 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:20:15.016 07:52:04 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:15.016 07:52:04 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76683 00:20:15.016 killing process with pid 76683 00:20:15.016 07:52:04 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:15.016 07:52:04 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:15.016 07:52:04 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76683' 00:20:15.016 07:52:04 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76683 00:20:15.016 07:52:04 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76683 00:20:15.588 [2024-11-29 07:52:05.336727] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.588 [2024-11-29 07:52:05.336776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:15.588 [2024-11-29 07:52:05.336787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:15.588 [2024-11-29 07:52:05.336794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.588 [2024-11-29 07:52:05.336813] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:15.588 [2024-11-29 07:52:05.338941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.588 [2024-11-29 07:52:05.338966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:15.588 [2024-11-29 07:52:05.338977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.115 ms 00:20:15.588 [2024-11-29 07:52:05.338984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.588 [2024-11-29 07:52:05.339220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.588 [2024-11-29 07:52:05.339227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:15.588 [2024-11-29 07:52:05.339236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.202 ms 00:20:15.588 [2024-11-29 07:52:05.339241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.588 [2024-11-29 07:52:05.342328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.588 [2024-11-29 07:52:05.342354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:15.588 [2024-11-29 07:52:05.342362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.070 ms 00:20:15.588 [2024-11-29 07:52:05.342368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.588 [2024-11-29 07:52:05.347652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.588 [2024-11-29 07:52:05.347783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:15.588 [2024-11-29 07:52:05.347799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.257 ms 00:20:15.588 [2024-11-29 07:52:05.347805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.589 [2024-11-29 07:52:05.355141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.589 [2024-11-29 07:52:05.355249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:15.589 [2024-11-29 07:52:05.355264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.277 ms 00:20:15.589 [2024-11-29 07:52:05.355271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.589 [2024-11-29 07:52:05.361696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.589 [2024-11-29 07:52:05.361799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:15.589 [2024-11-29 07:52:05.361814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.394 ms 00:20:15.589 [2024-11-29 07:52:05.361820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.589 [2024-11-29 07:52:05.361928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.589 [2024-11-29 07:52:05.361936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:15.589 [2024-11-29 07:52:05.361944] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:20:15.589 [2024-11-29 07:52:05.361950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.589 [2024-11-29 07:52:05.369826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.589 [2024-11-29 07:52:05.369851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:15.589 [2024-11-29 07:52:05.369860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.859 ms 00:20:15.589 [2024-11-29 07:52:05.369866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.589 [2024-11-29 07:52:05.377341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.589 [2024-11-29 07:52:05.377516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:15.589 [2024-11-29 07:52:05.377533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.444 ms 00:20:15.589 [2024-11-29 07:52:05.377538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.589 [2024-11-29 07:52:05.384530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.589 [2024-11-29 07:52:05.384624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:15.589 [2024-11-29 07:52:05.384638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.950 ms 00:20:15.589 [2024-11-29 07:52:05.384644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.589 [2024-11-29 07:52:05.391707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.589 [2024-11-29 07:52:05.391798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:15.589 [2024-11-29 07:52:05.391812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.013 ms 00:20:15.589 [2024-11-29 07:52:05.391817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.589 [2024-11-29 07:52:05.391852] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:15.589 [2024-11-29 07:52:05.391863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:15.589 [2024-11-29 07:52:05.391874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:15.589 [2024-11-29 07:52:05.391880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:15.589 [2024-11-29 07:52:05.391887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:15.589 [2024-11-29 07:52:05.391893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:15.589 [2024-11-29 07:52:05.391901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:15.589 [2024-11-29 07:52:05.391906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:15.589 [2024-11-29 07:52:05.391913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:15.589 [2024-11-29 07:52:05.391919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:15.589 [2024-11-29 07:52:05.391926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:15.589 [2024-11-29 07:52:05.391932] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:15.589 [2024-11-29 07:52:05.391939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:15.589 [2024-11-29 07:52:05.391944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:15.589 [2024-11-29 07:52:05.391951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:15.589 [2024-11-29 07:52:05.391956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:15.589 [2024-11-29 07:52:05.391965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:15.589 [2024-11-29 07:52:05.391970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:15.589 [2024-11-29 07:52:05.391977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:15.589 [2024-11-29 07:52:05.391982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:15.589 [2024-11-29 07:52:05.391988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:15.589 [2024-11-29 07:52:05.391994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:15.589 [2024-11-29 07:52:05.392002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:15.589 [2024-11-29 07:52:05.392007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:15.589 [2024-11-29 07:52:05.392014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:15.589 [2024-11-29 07:52:05.392020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:15.589 [2024-11-29 07:52:05.392027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:15.589 [2024-11-29 07:52:05.392032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:15.589 [2024-11-29 07:52:05.392040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:15.589 [2024-11-29 07:52:05.392045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:15.589 [2024-11-29 07:52:05.392052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:15.589 [2024-11-29 07:52:05.392059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:15.589 [2024-11-29 07:52:05.392066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:15.589 [2024-11-29 07:52:05.392072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:15.589 [2024-11-29 07:52:05.392078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:15.589 [2024-11-29 07:52:05.392084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:15.589 
[2024-11-29 07:52:05.392091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:15.589 [2024-11-29 07:52:05.392096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:15.589 [2024-11-29 07:52:05.392104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:15.589 [2024-11-29 07:52:05.392109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:15.589 [2024-11-29 07:52:05.392116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:15.589 [2024-11-29 07:52:05.392122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:15.589 [2024-11-29 07:52:05.392130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:15.589 [2024-11-29 07:52:05.392136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:15.589 [2024-11-29 07:52:05.392142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:15.589 [2024-11-29 07:52:05.392148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:15.589 [2024-11-29 07:52:05.392154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:15.589 [2024-11-29 07:52:05.392160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:15.589 [2024-11-29 07:52:05.392166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:15.589 [2024-11-29 07:52:05.392172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:15.589 [2024-11-29 07:52:05.392178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:15.589 [2024-11-29 07:52:05.392184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:15.589 [2024-11-29 07:52:05.392190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:15.589 [2024-11-29 07:52:05.392197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:15.589 [2024-11-29 07:52:05.392205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:15.589 [2024-11-29 07:52:05.392210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:15.589 [2024-11-29 07:52:05.392217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:15.589 [2024-11-29 07:52:05.392222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:15.589 [2024-11-29 07:52:05.392229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:15.589 [2024-11-29 07:52:05.392234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:15.589 [2024-11-29 07:52:05.392241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:20:15.589 [2024-11-29 07:52:05.392246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:15.589 [2024-11-29 07:52:05.392253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:15.590 [2024-11-29 07:52:05.392259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:15.590 [2024-11-29 07:52:05.392266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:15.590 [2024-11-29 07:52:05.392272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:15.590 [2024-11-29 07:52:05.392278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:15.590 [2024-11-29 07:52:05.392284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:15.590 [2024-11-29 07:52:05.392291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:15.590 [2024-11-29 07:52:05.392296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:15.590 [2024-11-29 07:52:05.392305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:15.590 [2024-11-29 07:52:05.392311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:15.590 [2024-11-29 07:52:05.392318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:15.590 [2024-11-29 07:52:05.392323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:15.590 [2024-11-29 07:52:05.392330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:15.590 [2024-11-29 07:52:05.392335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:15.590 [2024-11-29 07:52:05.392342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:15.590 [2024-11-29 07:52:05.392347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:15.590 [2024-11-29 07:52:05.392354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:15.590 [2024-11-29 07:52:05.392360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:15.590 [2024-11-29 07:52:05.392367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:15.590 [2024-11-29 07:52:05.392372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:15.590 [2024-11-29 07:52:05.392379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:15.590 [2024-11-29 07:52:05.392384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:15.590 [2024-11-29 07:52:05.392391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:15.590 [2024-11-29 07:52:05.392396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:20:15.590 [2024-11-29 07:52:05.392404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:15.590 [2024-11-29 07:52:05.392410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:15.590 [2024-11-29 07:52:05.392417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:15.590 [2024-11-29 07:52:05.392422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:15.590 [2024-11-29 07:52:05.392429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:15.590 [2024-11-29 07:52:05.392434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:15.590 [2024-11-29 07:52:05.392441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:15.590 [2024-11-29 07:52:05.392456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:15.590 [2024-11-29 07:52:05.392463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:15.590 [2024-11-29 07:52:05.392469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:15.590 [2024-11-29 07:52:05.392477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:15.590 [2024-11-29 07:52:05.392483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:15.590 [2024-11-29 07:52:05.392489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:15.590 [2024-11-29 07:52:05.392495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:15.590 [2024-11-29 07:52:05.392502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:15.590 [2024-11-29 07:52:05.392518] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:15.590 [2024-11-29 07:52:05.392533] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3615a4e7-f157-4413-91c6-683bd04f0ab0 00:20:15.590 [2024-11-29 07:52:05.392541] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:15.590 [2024-11-29 07:52:05.392548] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:15.590 [2024-11-29 07:52:05.392554] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:15.590 [2024-11-29 07:52:05.392561] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:15.590 [2024-11-29 07:52:05.392566] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:15.590 [2024-11-29 07:52:05.392573] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:15.590 [2024-11-29 07:52:05.392579] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:15.590 [2024-11-29 07:52:05.392585] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:15.590 [2024-11-29 07:52:05.392590] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:15.590 [2024-11-29 07:52:05.392596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:15.590 [2024-11-29 07:52:05.392602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:15.590 [2024-11-29 07:52:05.392609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.746 ms 00:20:15.590 [2024-11-29 07:52:05.392616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.590 [2024-11-29 07:52:05.402323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.590 [2024-11-29 07:52:05.402348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:15.590 [2024-11-29 07:52:05.402359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.689 ms 00:20:15.590 [2024-11-29 07:52:05.402365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.590 [2024-11-29 07:52:05.402678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.590 [2024-11-29 07:52:05.402692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:15.590 [2024-11-29 07:52:05.402702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:20:15.590 [2024-11-29 07:52:05.402708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.590 [2024-11-29 07:52:05.437933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.590 [2024-11-29 07:52:05.438028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:15.590 [2024-11-29 07:52:05.438042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.590 [2024-11-29 07:52:05.438048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.590 [2024-11-29 07:52:05.438121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.590 [2024-11-29 07:52:05.438129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:15.590 [2024-11-29 07:52:05.438138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.590 [2024-11-29 07:52:05.438144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.590 [2024-11-29 07:52:05.438179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.590 [2024-11-29 07:52:05.438186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:15.590 [2024-11-29 07:52:05.438195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.590 [2024-11-29 07:52:05.438201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.590 [2024-11-29 07:52:05.438215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.590 [2024-11-29 07:52:05.438221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:15.590 [2024-11-29 07:52:05.438228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.590 [2024-11-29 07:52:05.438234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.590 [2024-11-29 07:52:05.498727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.590 [2024-11-29 07:52:05.498760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:15.590 [2024-11-29 07:52:05.498770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.590 [2024-11-29 07:52:05.498777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.849 [2024-11-29 
07:52:05.548349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.849 [2024-11-29 07:52:05.548495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:15.849 [2024-11-29 07:52:05.548514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.849 [2024-11-29 07:52:05.548521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.849 [2024-11-29 07:52:05.548581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.849 [2024-11-29 07:52:05.548589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:15.849 [2024-11-29 07:52:05.548599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.849 [2024-11-29 07:52:05.548604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.849 [2024-11-29 07:52:05.548628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.849 [2024-11-29 07:52:05.548635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:15.849 [2024-11-29 07:52:05.548643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.849 [2024-11-29 07:52:05.548649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.849 [2024-11-29 07:52:05.548724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.849 [2024-11-29 07:52:05.548732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:15.849 [2024-11-29 07:52:05.548739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.849 [2024-11-29 07:52:05.548745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.849 [2024-11-29 07:52:05.548771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.850 [2024-11-29 07:52:05.548777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:15.850 [2024-11-29 07:52:05.548785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.850 [2024-11-29 07:52:05.548791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.850 [2024-11-29 07:52:05.548823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.850 [2024-11-29 07:52:05.548829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:15.850 [2024-11-29 07:52:05.548838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.850 [2024-11-29 07:52:05.548844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.850 [2024-11-29 07:52:05.548887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.850 [2024-11-29 07:52:05.548895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:15.850 [2024-11-29 07:52:05.548902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.850 [2024-11-29 07:52:05.548908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.850 [2024-11-29 07:52:05.549013] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 212.270 ms, result 0 00:20:16.417 07:52:06 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:16.417 07:52:06 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:16.417 [2024-11-29 07:52:06.143982] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:20:16.417 [2024-11-29 07:52:06.144110] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76730 ] 00:20:16.417 [2024-11-29 07:52:06.299660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.676 [2024-11-29 07:52:06.385959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.676 [2024-11-29 07:52:06.595726] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:16.676 [2024-11-29 07:52:06.595779] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:16.938 [2024-11-29 07:52:06.753232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.938 [2024-11-29 07:52:06.753426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:16.938 [2024-11-29 07:52:06.753465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:16.938 [2024-11-29 07:52:06.753475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.938 [2024-11-29 07:52:06.756194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.938 [2024-11-29 07:52:06.756234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:16.938 [2024-11-29 07:52:06.756243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.692 ms 00:20:16.938 [2024-11-29 07:52:06.756251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.938 [2024-11-29 07:52:06.756347] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:16.938 [2024-11-29 07:52:06.757151] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:16.938 [2024-11-29 07:52:06.757258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.938 [2024-11-29 07:52:06.757658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:16.938 [2024-11-29 07:52:06.757684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.919 ms 00:20:16.938 [2024-11-29 07:52:06.757693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.938 [2024-11-29 07:52:06.759277] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:16.938 [2024-11-29 07:52:06.772488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.938 [2024-11-29 07:52:06.772530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:16.938 [2024-11-29 07:52:06.772544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.213 ms 00:20:16.938 [2024-11-29 07:52:06.772552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.938 [2024-11-29 07:52:06.772653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.938 [2024-11-29 07:52:06.772666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:16.938 [2024-11-29 07:52:06.772675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.023 ms 00:20:16.938 [2024-11-29 07:52:06.772683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.938 [2024-11-29 07:52:06.779183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.938 [2024-11-29 07:52:06.779216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:16.938 [2024-11-29 07:52:06.779226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.457 ms 00:20:16.938 [2024-11-29 07:52:06.779233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.938 [2024-11-29 07:52:06.779329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.938 [2024-11-29 07:52:06.779343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:16.938 [2024-11-29 07:52:06.779351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:20:16.938 [2024-11-29 07:52:06.779359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.938 [2024-11-29 07:52:06.779386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.938 [2024-11-29 07:52:06.779395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:16.938 [2024-11-29 07:52:06.779403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:16.938 [2024-11-29 07:52:06.779410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.938 [2024-11-29 07:52:06.779432] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:16.938 [2024-11-29 07:52:06.783248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.938 [2024-11-29 07:52:06.783407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:16.938 [2024-11-29 07:52:06.783425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.823 ms 00:20:16.938 [2024-11-29 07:52:06.783433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.938 [2024-11-29 07:52:06.783520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.938 [2024-11-29 07:52:06.783531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:16.938 [2024-11-29 07:52:06.783540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:16.938 [2024-11-29 07:52:06.783547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.938 [2024-11-29 07:52:06.783570] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:16.938 [2024-11-29 07:52:06.783589] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:16.938 [2024-11-29 07:52:06.783625] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:16.938 [2024-11-29 07:52:06.783641] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:16.938 [2024-11-29 07:52:06.783745] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:16.938 [2024-11-29 07:52:06.783756] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:16.938 [2024-11-29 07:52:06.783767] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:16.938 [2024-11-29 07:52:06.783780] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:16.938 [2024-11-29 07:52:06.783788] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:16.938 [2024-11-29 07:52:06.783797] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:16.938 [2024-11-29 07:52:06.783805] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:16.938 [2024-11-29 07:52:06.783813] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:16.938 [2024-11-29 07:52:06.783820] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:16.938 [2024-11-29 07:52:06.783828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.938 [2024-11-29 07:52:06.783836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:16.939 [2024-11-29 07:52:06.783844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:20:16.939 [2024-11-29 07:52:06.783851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.939 [2024-11-29 07:52:06.783938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.939 [2024-11-29 07:52:06.783949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:16.939 [2024-11-29 07:52:06.783956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:16.939 [2024-11-29 07:52:06.783964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.939 [2024-11-29 07:52:06.784066] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:16.939 [2024-11-29 07:52:06.784076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:16.939 [2024-11-29 07:52:06.784085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:16.939 [2024-11-29 07:52:06.784092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:16.939 [2024-11-29 07:52:06.784100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:16.939 [2024-11-29 07:52:06.784107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:16.939 [2024-11-29 07:52:06.784114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:16.939 [2024-11-29 07:52:06.784121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:16.939 [2024-11-29 07:52:06.784129] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:16.939 [2024-11-29 07:52:06.784136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:16.939 [2024-11-29 07:52:06.784143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:16.939 [2024-11-29 07:52:06.784157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:16.939 [2024-11-29 07:52:06.784163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:16.939 [2024-11-29 07:52:06.784171] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:16.939 [2024-11-29 07:52:06.784178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:16.939 [2024-11-29 07:52:06.784185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:16.939 [2024-11-29 07:52:06.784193] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:16.939 [2024-11-29 07:52:06.784199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:16.939 [2024-11-29 07:52:06.784206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:16.939 [2024-11-29 07:52:06.784213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:16.939 [2024-11-29 07:52:06.784220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:16.939 [2024-11-29 07:52:06.784227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:16.939 [2024-11-29 07:52:06.784234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:16.939 [2024-11-29 07:52:06.784240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:16.939 [2024-11-29 07:52:06.784247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:16.939 [2024-11-29 07:52:06.784253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:16.939 [2024-11-29 07:52:06.784260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:16.939 [2024-11-29 07:52:06.784266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:16.939 [2024-11-29 07:52:06.784272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:16.939 [2024-11-29 07:52:06.784279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:16.939 [2024-11-29 07:52:06.784285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:16.939 [2024-11-29 07:52:06.784292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:16.939 [2024-11-29 07:52:06.784298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:16.939 [2024-11-29 07:52:06.784304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:16.939 [2024-11-29 07:52:06.784311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:16.939 [2024-11-29 07:52:06.784317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:16.939 [2024-11-29 07:52:06.784324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:16.939 [2024-11-29 07:52:06.784331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:16.939 [2024-11-29 07:52:06.784337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:16.939 [2024-11-29 07:52:06.784343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:16.939 [2024-11-29 07:52:06.784350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:16.939 [2024-11-29 07:52:06.784356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:16.939 [2024-11-29 07:52:06.784363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:16.939 [2024-11-29 07:52:06.784370] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:16.939 [2024-11-29 07:52:06.784378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:16.939 [2024-11-29 07:52:06.784390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:16.939 [2024-11-29 07:52:06.784398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:16.939 [2024-11-29 07:52:06.784406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:16.939 
[2024-11-29 07:52:06.784413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:16.939 [2024-11-29 07:52:06.784419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:16.939 [2024-11-29 07:52:06.784426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:16.939 [2024-11-29 07:52:06.784432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:16.939 [2024-11-29 07:52:06.784439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:16.939 [2024-11-29 07:52:06.784463] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:16.939 [2024-11-29 07:52:06.784473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:16.939 [2024-11-29 07:52:06.784481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:16.939 [2024-11-29 07:52:06.784489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:16.939 [2024-11-29 07:52:06.784497] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:16.939 [2024-11-29 07:52:06.784504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:16.939 [2024-11-29 07:52:06.784512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:16.939 [2024-11-29 07:52:06.784520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:16.939 [2024-11-29 07:52:06.784528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:16.939 [2024-11-29 07:52:06.784535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:16.939 [2024-11-29 07:52:06.784542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:16.939 [2024-11-29 07:52:06.784549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:16.939 [2024-11-29 07:52:06.784557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:16.939 [2024-11-29 07:52:06.784563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:16.939 [2024-11-29 07:52:06.784570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:16.939 [2024-11-29 07:52:06.784578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:16.939 [2024-11-29 07:52:06.784585] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:16.939 [2024-11-29 07:52:06.784593] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:16.939 [2024-11-29 07:52:06.784602] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:16.939 [2024-11-29 07:52:06.784610] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:16.939 [2024-11-29 07:52:06.784618] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:16.939 [2024-11-29 07:52:06.784625] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:16.939 [2024-11-29 07:52:06.784632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.939 [2024-11-29 07:52:06.784643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:16.939 [2024-11-29 07:52:06.784652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.636 ms 00:20:16.939 [2024-11-29 07:52:06.784659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.939 [2024-11-29 07:52:06.815570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.939 [2024-11-29 07:52:06.815733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:16.939 [2024-11-29 07:52:06.815798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.850 ms 00:20:16.939 [2024-11-29 07:52:06.815823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.939 [2024-11-29 07:52:06.815973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.939 [2024-11-29 07:52:06.816000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:16.939 [2024-11-29 07:52:06.816020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:20:16.939 [2024-11-29 07:52:06.816041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.939 [2024-11-29 07:52:06.867937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.939 [2024-11-29 07:52:06.868116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:16.939 [2024-11-29 07:52:06.868189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.795 ms 00:20:16.939 [2024-11-29 07:52:06.868213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.940 [2024-11-29 07:52:06.868337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.940 [2024-11-29 07:52:06.868367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:16.940 [2024-11-29 07:52:06.868389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:16.940 [2024-11-29 07:52:06.868408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.940 [2024-11-29 07:52:06.868965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.940 [2024-11-29 07:52:06.869019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:16.940 [2024-11-29 07:52:06.869051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.503 ms 00:20:16.940 [2024-11-29 07:52:06.869070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.940 [2024-11-29 
07:52:06.869240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.940 [2024-11-29 07:52:06.869337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:16.940 [2024-11-29 07:52:06.869362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:20:16.940 [2024-11-29 07:52:06.869382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.202 [2024-11-29 07:52:06.885780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.202 [2024-11-29 07:52:06.885943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:17.202 [2024-11-29 07:52:06.885996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.358 ms 00:20:17.202 [2024-11-29 07:52:06.886019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.202 [2024-11-29 07:52:06.900330] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:17.202 [2024-11-29 07:52:06.900527] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:17.202 [2024-11-29 07:52:06.900595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.202 [2024-11-29 07:52:06.900616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:17.202 [2024-11-29 07:52:06.900637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.442 ms 00:20:17.202 [2024-11-29 07:52:06.900665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.202 [2024-11-29 07:52:06.925866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.202 [2024-11-29 07:52:06.926019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:17.202 [2024-11-29 07:52:06.926078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.094 ms 00:20:17.202 [2024-11-29 07:52:06.926103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.202 [2024-11-29 07:52:06.938807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.202 [2024-11-29 07:52:06.938961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:17.202 [2024-11-29 07:52:06.939016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.533 ms 00:20:17.202 [2024-11-29 07:52:06.939037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.202 [2024-11-29 07:52:06.951622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.202 [2024-11-29 07:52:06.951769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:17.202 [2024-11-29 07:52:06.951825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.422 ms 00:20:17.202 [2024-11-29 07:52:06.951847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.202 [2024-11-29 07:52:06.952650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.202 [2024-11-29 07:52:06.952787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:17.202 [2024-11-29 07:52:06.952841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.606 ms 00:20:17.202 [2024-11-29 07:52:06.952863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.202 [2024-11-29 07:52:07.018280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:17.202 [2024-11-29 07:52:07.018488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:17.202 [2024-11-29 07:52:07.018553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.360 ms 00:20:17.202 [2024-11-29 07:52:07.018578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.202 [2024-11-29 07:52:07.029347] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:17.202 [2024-11-29 07:52:07.047993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.202 [2024-11-29 07:52:07.048162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:17.202 [2024-11-29 07:52:07.048218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.307 ms 00:20:17.202 [2024-11-29 07:52:07.048249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.202 [2024-11-29 07:52:07.048362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.202 [2024-11-29 07:52:07.048391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:17.202 [2024-11-29 07:52:07.048413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:17.202 [2024-11-29 07:52:07.048433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.202 [2024-11-29 07:52:07.048536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.202 [2024-11-29 07:52:07.048665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:17.202 [2024-11-29 07:52:07.048687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:20:17.202 [2024-11-29 07:52:07.048715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.202 [2024-11-29 07:52:07.048764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.202 [2024-11-29 07:52:07.048786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:17.202 [2024-11-29 07:52:07.048907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:17.202 [2024-11-29 07:52:07.048934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.202 [2024-11-29 07:52:07.048991] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:17.202 [2024-11-29 07:52:07.049016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.202 [2024-11-29 07:52:07.049036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:17.202 [2024-11-29 07:52:07.049057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:20:17.202 [2024-11-29 07:52:07.049075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.202 [2024-11-29 07:52:07.074089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.202 [2024-11-29 07:52:07.074242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:17.202 [2024-11-29 07:52:07.074301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.977 ms 00:20:17.202 [2024-11-29 07:52:07.074313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.202 [2024-11-29 07:52:07.074767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.202 [2024-11-29 07:52:07.074802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:20:17.202 [2024-11-29 07:52:07.074815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:20:17.202 [2024-11-29 07:52:07.074825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.202 [2024-11-29 07:52:07.075957] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:17.202 [2024-11-29 07:52:07.079437] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 322.387 ms, result 0 00:20:17.202 [2024-11-29 07:52:07.080815] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:17.202 [2024-11-29 07:52:07.094194] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:18.590  [2024-11-29T07:52:09.107Z] Copying: 24/256 [MB] (24 MBps) [2024-11-29T07:52:10.493Z] Copying: 49/256 [MB] (25 MBps) [2024-11-29T07:52:11.435Z] Copying: 61/256 [MB] (11 MBps) [2024-11-29T07:52:12.380Z] Copying: 86/256 [MB] (24 MBps) [2024-11-29T07:52:13.320Z] Copying: 105/256 [MB] (19 MBps) [2024-11-29T07:52:14.261Z] Copying: 131/256 [MB] (25 MBps) [2024-11-29T07:52:15.204Z] Copying: 151/256 [MB] (20 MBps) [2024-11-29T07:52:16.148Z] Copying: 171/256 [MB] (19 MBps) [2024-11-29T07:52:17.534Z] Copying: 185/256 [MB] (14 MBps) [2024-11-29T07:52:18.106Z] Copying: 204/256 [MB] (19 MBps) [2024-11-29T07:52:19.491Z] Copying: 222/256 [MB] (17 MBps) [2024-11-29T07:52:19.751Z] Copying: 245/256 [MB] (22 MBps) [2024-11-29T07:52:19.751Z] Copying: 256/256 [MB] (average 20 MBps)[2024-11-29 07:52:19.647633] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:29.807 [2024-11-29 07:52:19.658002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.807 [2024-11-29 07:52:19.658053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:29.807 [2024-11-29 07:52:19.658076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:29.807 [2024-11-29 07:52:19.658085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.807 [2024-11-29 07:52:19.658111] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:29.807 [2024-11-29 07:52:19.661087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.807 [2024-11-29 07:52:19.661129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:29.807 [2024-11-29 07:52:19.661142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.961 ms 00:20:29.807 [2024-11-29 07:52:19.661150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.807 [2024-11-29 07:52:19.661418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.807 [2024-11-29 07:52:19.661429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:29.807 [2024-11-29 07:52:19.661437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.242 ms 00:20:29.807 [2024-11-29 07:52:19.661464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.807 [2024-11-29 07:52:19.665177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.807 [2024-11-29 07:52:19.665341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:29.807 [2024-11-29 07:52:19.665360] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.693 ms 00:20:29.807 [2024-11-29 07:52:19.665368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.807 [2024-11-29 07:52:19.672438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.807 [2024-11-29 07:52:19.672585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:29.807 [2024-11-29 07:52:19.672648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.042 ms 00:20:29.807 [2024-11-29 07:52:19.672672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.807 [2024-11-29 07:52:19.698023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.807 [2024-11-29 07:52:19.698187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:29.807 [2024-11-29 07:52:19.698252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.268 ms 00:20:29.807 [2024-11-29 07:52:19.698275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.807 [2024-11-29 07:52:19.714768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.807 [2024-11-29 07:52:19.714928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:29.807 [2024-11-29 07:52:19.715062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.439 ms 00:20:29.807 [2024-11-29 07:52:19.715087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.807 [2024-11-29 07:52:19.715242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.807 [2024-11-29 07:52:19.715416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:29.807 [2024-11-29 07:52:19.715478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:20:29.807 [2024-11-29 07:52:19.715501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.807 [2024-11-29 07:52:19.741075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.807 [2024-11-29 07:52:19.741230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:29.807 [2024-11-29 07:52:19.741289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.540 ms 00:20:29.808 [2024-11-29 07:52:19.741311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.069 [2024-11-29 07:52:19.766064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.069 [2024-11-29 07:52:19.766217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:30.069 [2024-11-29 07:52:19.766275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.690 ms 00:20:30.069 [2024-11-29 07:52:19.766298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.069 [2024-11-29 07:52:19.790874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.069 [2024-11-29 07:52:19.791025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:30.069 [2024-11-29 07:52:19.791082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.470 ms 00:20:30.069 [2024-11-29 07:52:19.791103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.069 [2024-11-29 07:52:19.815694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.069 [2024-11-29 07:52:19.815844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Set FTL clean state 00:20:30.069 [2024-11-29 07:52:19.815900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.466 ms 00:20:30.069 [2024-11-29 07:52:19.815921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.069 [2024-11-29 07:52:19.816028] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:30.069 [2024-11-29 07:52:19.816061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:30.069 [2024-11-29 07:52:19.816094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:30.069 [2024-11-29 07:52:19.816122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:30.069 [2024-11-29 07:52:19.816150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:30.069 [2024-11-29 07:52:19.816238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:30.069 [2024-11-29 07:52:19.816268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:30.069 [2024-11-29 07:52:19.816298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:30.069 [2024-11-29 07:52:19.816326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:30.069 [2024-11-29 07:52:19.816354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:30.069 [2024-11-29 07:52:19.816383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.816558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.816627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.816705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.816737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.816786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.816840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.816885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.816975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.817004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.817033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.817062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.817090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 
07:52:19.817119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.817147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.817224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.817257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.817285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.817314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.817342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.817370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.817456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.817487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.817516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.817545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.817613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.817645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.817673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.817702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.817730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.817792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.817822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.817850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.817878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.817908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.817968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.818133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.818142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:20:30.070 [2024-11-29 07:52:19.818150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.818158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.818166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.818173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.818181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.818188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.818195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.818203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.818210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.818217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.818224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.818231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.818239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.818246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.818254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.818261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.818269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.818276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.818284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.818291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.818298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.818306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.818313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.818320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.818328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.818336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.818343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.818351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.818360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.818367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.818374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.818381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.818389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.818396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.818403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.818410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.818418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.818425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.818432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.818439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.818460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.818467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.818474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.818482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.818489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.818496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.818513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.818521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.818529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.818536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.818543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:30.070 [2024-11-29 07:52:19.818551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:30.071 [2024-11-29 07:52:19.818559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:30.071 [2024-11-29 07:52:19.818575] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:30.071 [2024-11-29 07:52:19.818583] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3615a4e7-f157-4413-91c6-683bd04f0ab0 00:20:30.071 [2024-11-29 07:52:19.818591] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:30.071 [2024-11-29 07:52:19.818602] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:30.071 [2024-11-29 07:52:19.818611] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:30.071 [2024-11-29 07:52:19.818619] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:30.071 [2024-11-29 07:52:19.818626] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:30.071 [2024-11-29 07:52:19.818634] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:30.071 [2024-11-29 07:52:19.818645] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:30.071 [2024-11-29 07:52:19.818651] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:30.071 [2024-11-29 07:52:19.818658] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:30.071 [2024-11-29 07:52:19.818666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.071 [2024-11-29 07:52:19.818673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:30.071 [2024-11-29 07:52:19.818682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.639 ms 00:20:30.071 [2024-11-29 07:52:19.818690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.071 [2024-11-29 07:52:19.832265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.071 [2024-11-29 07:52:19.832397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:30.071 [2024-11-29 07:52:19.832482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.549 ms 00:20:30.071 [2024-11-29 07:52:19.832589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.071 [2024-11-29 07:52:19.833026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.071 [2024-11-29 07:52:19.833072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:30.071 [2024-11-29 07:52:19.833138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.369 ms 00:20:30.071 [2024-11-29 07:52:19.833418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.071 [2024-11-29 07:52:19.871886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:30.071 [2024-11-29 07:52:19.872045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:30.071 [2024-11-29 07:52:19.872104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:30.071 [2024-11-29 07:52:19.872138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.071 
[2024-11-29 07:52:19.872246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:30.071 [2024-11-29 07:52:19.872271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:30.071 [2024-11-29 07:52:19.872291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:30.071 [2024-11-29 07:52:19.872311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.071 [2024-11-29 07:52:19.872375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:30.071 [2024-11-29 07:52:19.872399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:30.071 [2024-11-29 07:52:19.872420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:30.071 [2024-11-29 07:52:19.872522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.071 [2024-11-29 07:52:19.872566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:30.071 [2024-11-29 07:52:19.872661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:30.071 [2024-11-29 07:52:19.872723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:30.071 [2024-11-29 07:52:19.872747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.071 [2024-11-29 07:52:19.958125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:30.071 [2024-11-29 07:52:19.958332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:30.071 [2024-11-29 07:52:19.958392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:30.071 [2024-11-29 07:52:19.958416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.332 [2024-11-29 07:52:20.027666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:30.332 [2024-11-29 07:52:20.027875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:30.332 [2024-11-29 07:52:20.027938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:30.332 [2024-11-29 07:52:20.027963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.332 [2024-11-29 07:52:20.028040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:30.332 [2024-11-29 07:52:20.028065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:30.332 [2024-11-29 07:52:20.028086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:30.332 [2024-11-29 07:52:20.028105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.332 [2024-11-29 07:52:20.028150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:30.332 [2024-11-29 07:52:20.028178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:30.332 [2024-11-29 07:52:20.028199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:30.332 [2024-11-29 07:52:20.028265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.332 [2024-11-29 07:52:20.028397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:30.332 [2024-11-29 07:52:20.028423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:30.332 [2024-11-29 07:52:20.028459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:30.332 [2024-11-29 07:52:20.028481] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.332 [2024-11-29 07:52:20.028538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:30.332 [2024-11-29 07:52:20.028562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:30.332 [2024-11-29 07:52:20.028588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:30.332 [2024-11-29 07:52:20.028607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.332 [2024-11-29 07:52:20.028660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:30.332 [2024-11-29 07:52:20.028742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:30.332 [2024-11-29 07:52:20.028766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:30.332 [2024-11-29 07:52:20.028786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.332 [2024-11-29 07:52:20.028851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:30.332 [2024-11-29 07:52:20.029051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:30.332 [2024-11-29 07:52:20.029071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:30.332 [2024-11-29 07:52:20.029092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.332 [2024-11-29 07:52:20.029322] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 371.306 ms, result 0 00:20:30.903 00:20:30.903 00:20:30.903 07:52:20 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:20:30.903 07:52:20 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:31.475 07:52:21 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:31.736 [2024-11-29 07:52:21.441953] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:20:31.736 [2024-11-29 07:52:21.442105] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76890 ] 00:20:31.736 [2024-11-29 07:52:21.604318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.998 [2024-11-29 07:52:21.717416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:32.261 [2024-11-29 07:52:21.997630] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:32.261 [2024-11-29 07:52:21.997720] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:32.261 [2024-11-29 07:52:22.161240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.261 [2024-11-29 07:52:22.161299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:32.261 [2024-11-29 07:52:22.161314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:32.261 [2024-11-29 07:52:22.161323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.261 [2024-11-29 07:52:22.164302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.261 [2024-11-29 07:52:22.164354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:32.261 [2024-11-29 07:52:22.164365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.957 ms 00:20:32.261 [2024-11-29 07:52:22.164373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.261 [2024-11-29 07:52:22.164525] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:32.261 [2024-11-29 07:52:22.165823] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:32.261 [2024-11-29 07:52:22.165880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.261 [2024-11-29 07:52:22.165892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:32.261 [2024-11-29 07:52:22.165903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.369 ms 00:20:32.261 [2024-11-29 07:52:22.165912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.261 [2024-11-29 07:52:22.167740] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:32.261 [2024-11-29 07:52:22.182226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.261 [2024-11-29 07:52:22.182273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:32.261 [2024-11-29 07:52:22.182286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.488 ms 00:20:32.261 [2024-11-29 07:52:22.182295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.261 [2024-11-29 07:52:22.182413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.261 [2024-11-29 07:52:22.182425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:32.261 [2024-11-29 07:52:22.182435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:20:32.261 [2024-11-29 07:52:22.182465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.261 [2024-11-29 07:52:22.190391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:32.261 [2024-11-29 07:52:22.190433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:32.261 [2024-11-29 07:52:22.190464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.878 ms 00:20:32.261 [2024-11-29 07:52:22.190474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.261 [2024-11-29 07:52:22.190582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.261 [2024-11-29 07:52:22.190593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:32.261 [2024-11-29 07:52:22.190602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:20:32.261 [2024-11-29 07:52:22.190610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.261 [2024-11-29 07:52:22.190641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.261 [2024-11-29 07:52:22.190653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:32.261 [2024-11-29 07:52:22.190665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:32.261 [2024-11-29 07:52:22.190676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.261 [2024-11-29 07:52:22.190707] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:32.261 [2024-11-29 07:52:22.194781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.261 [2024-11-29 07:52:22.194822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:32.261 [2024-11-29 07:52:22.194833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.083 ms 00:20:32.261 [2024-11-29 07:52:22.194841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.261 [2024-11-29 07:52:22.194919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.261 [2024-11-29 07:52:22.194930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:32.261 [2024-11-29 07:52:22.194939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:32.261 [2024-11-29 07:52:22.194947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.261 [2024-11-29 07:52:22.194972] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:32.261 [2024-11-29 07:52:22.194995] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:32.261 [2024-11-29 07:52:22.195032] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:32.261 [2024-11-29 07:52:22.195047] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:32.261 [2024-11-29 07:52:22.195153] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:32.261 [2024-11-29 07:52:22.195164] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:32.261 [2024-11-29 07:52:22.195175] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:32.261 [2024-11-29 07:52:22.195189] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:32.261 [2024-11-29 07:52:22.195198] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:32.261 [2024-11-29 07:52:22.195207] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:32.261 [2024-11-29 07:52:22.195215] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:32.261 [2024-11-29 07:52:22.195223] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:32.261 [2024-11-29 07:52:22.195231] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:32.261 [2024-11-29 07:52:22.195239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.261 [2024-11-29 07:52:22.195247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:32.261 [2024-11-29 07:52:22.195256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:20:32.261 [2024-11-29 07:52:22.195263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.261 [2024-11-29 07:52:22.195351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.261 [2024-11-29 07:52:22.195363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:32.261 [2024-11-29 07:52:22.195371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:32.261 [2024-11-29 07:52:22.195378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.261 [2024-11-29 07:52:22.195512] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:32.261 [2024-11-29 07:52:22.195531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:32.261 [2024-11-29 07:52:22.195545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:32.261 [2024-11-29 07:52:22.195558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:32.261 [2024-11-29 07:52:22.195569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:32.261 [2024-11-29 07:52:22.195579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:32.261 [2024-11-29 07:52:22.195591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:32.261 [2024-11-29 07:52:22.195601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:32.261 [2024-11-29 07:52:22.195612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:32.261 [2024-11-29 07:52:22.195623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:32.261 [2024-11-29 07:52:22.195635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:32.261 [2024-11-29 07:52:22.195656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:32.261 [2024-11-29 07:52:22.195667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:32.261 [2024-11-29 07:52:22.195677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:32.261 [2024-11-29 07:52:22.195689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:32.261 [2024-11-29 07:52:22.195701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:32.261 [2024-11-29 07:52:22.195713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:32.261 [2024-11-29 07:52:22.195729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:32.262 [2024-11-29 07:52:22.195748] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:32.262 [2024-11-29 07:52:22.195761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:32.262 [2024-11-29 07:52:22.195774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:32.262 [2024-11-29 07:52:22.195786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:32.262 [2024-11-29 07:52:22.195798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:32.262 [2024-11-29 07:52:22.195814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:32.262 [2024-11-29 07:52:22.195826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:32.262 [2024-11-29 07:52:22.195837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:32.262 [2024-11-29 07:52:22.195848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:32.262 [2024-11-29 07:52:22.195860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:32.262 [2024-11-29 07:52:22.195870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:32.262 [2024-11-29 07:52:22.195882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:32.262 [2024-11-29 07:52:22.195893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:32.262 [2024-11-29 07:52:22.195903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:32.262 [2024-11-29 07:52:22.195915] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:32.262 [2024-11-29 07:52:22.195925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:32.262 [2024-11-29 07:52:22.195937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:32.262 [2024-11-29 07:52:22.195949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:32.262 [2024-11-29 07:52:22.195961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:32.262 [2024-11-29 07:52:22.195979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:32.262 [2024-11-29 07:52:22.195990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:32.262 [2024-11-29 07:52:22.196002] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:32.262 [2024-11-29 07:52:22.196013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:32.262 [2024-11-29 07:52:22.196026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:32.262 [2024-11-29 07:52:22.196042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:32.262 [2024-11-29 07:52:22.196054] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:32.262 [2024-11-29 07:52:22.196063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:32.262 [2024-11-29 07:52:22.196076] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:32.262 [2024-11-29 07:52:22.196084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:32.262 [2024-11-29 07:52:22.196092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:32.262 [2024-11-29 07:52:22.196100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:32.262 [2024-11-29 07:52:22.196109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:32.262 
[2024-11-29 07:52:22.196116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:32.262 [2024-11-29 07:52:22.196123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:32.262 [2024-11-29 07:52:22.196130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:32.262 [2024-11-29 07:52:22.196140] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:32.262 [2024-11-29 07:52:22.196151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:32.262 [2024-11-29 07:52:22.196160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:32.262 [2024-11-29 07:52:22.196168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:32.262 [2024-11-29 07:52:22.196175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:32.262 [2024-11-29 07:52:22.196182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:32.262 [2024-11-29 07:52:22.196189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:32.262 [2024-11-29 07:52:22.196196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:32.262 [2024-11-29 07:52:22.196204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:32.262 [2024-11-29 07:52:22.196211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:32.262 [2024-11-29 07:52:22.196218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:32.262 [2024-11-29 07:52:22.196224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:32.262 [2024-11-29 07:52:22.196231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:32.262 [2024-11-29 07:52:22.196238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:32.262 [2024-11-29 07:52:22.196245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:32.262 [2024-11-29 07:52:22.196253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:32.262 [2024-11-29 07:52:22.196260] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:32.262 [2024-11-29 07:52:22.196269] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:32.262 [2024-11-29 07:52:22.196279] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:32.262 [2024-11-29 07:52:22.196286] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:32.262 [2024-11-29 07:52:22.196293] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:32.262 [2024-11-29 07:52:22.196300] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:32.262 [2024-11-29 07:52:22.196308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.262 [2024-11-29 07:52:22.196319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:32.262 [2024-11-29 07:52:22.196326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.897 ms 00:20:32.262 [2024-11-29 07:52:22.196334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.524 [2024-11-29 07:52:22.228547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.524 [2024-11-29 07:52:22.228598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:32.524 [2024-11-29 07:52:22.228611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.146 ms 00:20:32.524 [2024-11-29 07:52:22.228619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.524 [2024-11-29 07:52:22.228761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.524 [2024-11-29 07:52:22.228773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:32.524 [2024-11-29 07:52:22.228783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:20:32.524 [2024-11-29 07:52:22.228791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.524 [2024-11-29 07:52:22.275630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.524 [2024-11-29 07:52:22.275679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:32.524 [2024-11-29 07:52:22.275696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.815 ms 00:20:32.524 [2024-11-29 07:52:22.275705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.524 [2024-11-29 07:52:22.275814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.524 [2024-11-29 07:52:22.275826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:32.524 [2024-11-29 07:52:22.275837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:32.524 [2024-11-29 07:52:22.275845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.524 [2024-11-29 07:52:22.276358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.524 [2024-11-29 07:52:22.276379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:32.524 [2024-11-29 07:52:22.276398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.489 ms 00:20:32.524 [2024-11-29 07:52:22.276406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.524 [2024-11-29 07:52:22.276582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.524 [2024-11-29 07:52:22.276593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:32.524 [2024-11-29 07:52:22.276602] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:20:32.524 [2024-11-29 07:52:22.276609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.524 [2024-11-29 07:52:22.292822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.524 [2024-11-29 07:52:22.292865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:32.524 [2024-11-29 07:52:22.292892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.187 ms 00:20:32.524 [2024-11-29 07:52:22.292905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.524 [2024-11-29 07:52:22.307323] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:32.524 [2024-11-29 07:52:22.307371] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:32.524 [2024-11-29 07:52:22.307386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.524 [2024-11-29 07:52:22.307395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:32.524 [2024-11-29 07:52:22.307404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.368 ms 00:20:32.524 [2024-11-29 07:52:22.307412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.524 [2024-11-29 07:52:22.337964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.524 [2024-11-29 07:52:22.338155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:32.524 [2024-11-29 07:52:22.338177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.435 ms 00:20:32.524 [2024-11-29 07:52:22.338186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.524 [2024-11-29 07:52:22.351307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.524 [2024-11-29 07:52:22.351354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:32.524 [2024-11-29 07:52:22.351366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.031 ms 00:20:32.524 [2024-11-29 07:52:22.351374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.524 [2024-11-29 07:52:22.364128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.524 [2024-11-29 07:52:22.364174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:32.524 [2024-11-29 07:52:22.364186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.635 ms 00:20:32.524 [2024-11-29 07:52:22.364193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.524 [2024-11-29 07:52:22.364863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.524 [2024-11-29 07:52:22.364910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:32.524 [2024-11-29 07:52:22.364922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.549 ms 00:20:32.524 [2024-11-29 07:52:22.364929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.525 [2024-11-29 07:52:22.430210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.525 [2024-11-29 07:52:22.430276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:32.525 [2024-11-29 07:52:22.430290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 65.249 ms 00:20:32.525 [2024-11-29 07:52:22.430300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.525 [2024-11-29 07:52:22.441781] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:32.525 [2024-11-29 07:52:22.460976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.525 [2024-11-29 07:52:22.461029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:32.525 [2024-11-29 07:52:22.461043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.566 ms 00:20:32.525 [2024-11-29 07:52:22.461059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.525 [2024-11-29 07:52:22.461162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.525 [2024-11-29 07:52:22.461175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:32.525 [2024-11-29 07:52:22.461185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:20:32.525 [2024-11-29 07:52:22.461193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.525 [2024-11-29 07:52:22.461253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.525 [2024-11-29 07:52:22.461263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:32.525 [2024-11-29 07:52:22.461272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:20:32.525 [2024-11-29 07:52:22.461285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.525 [2024-11-29 07:52:22.461317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.525 [2024-11-29 07:52:22.461327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:32.525 [2024-11-29 07:52:22.461336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:32.525 [2024-11-29 07:52:22.461344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.525 [2024-11-29 07:52:22.461384] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:32.525 [2024-11-29 07:52:22.461395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.525 [2024-11-29 07:52:22.461404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:32.525 [2024-11-29 07:52:22.461413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:32.525 [2024-11-29 07:52:22.461421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.787 [2024-11-29 07:52:22.488051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.787 [2024-11-29 07:52:22.488230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:32.787 [2024-11-29 07:52:22.488253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.607 ms 00:20:32.787 [2024-11-29 07:52:22.488262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.787 [2024-11-29 07:52:22.488388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.787 [2024-11-29 07:52:22.488401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:32.787 [2024-11-29 07:52:22.488412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:20:32.787 [2024-11-29 07:52:22.488421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:32.787 [2024-11-29 07:52:22.489706] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:32.787 [2024-11-29 07:52:22.493154] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 328.106 ms, result 0 00:20:32.787 [2024-11-29 07:52:22.494360] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:32.787 [2024-11-29 07:52:22.507877] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:33.050  [2024-11-29T07:52:22.994Z] Copying: 4096/4096 [kB] (average 12 MBps)[2024-11-29 07:52:22.822380] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:33.050 [2024-11-29 07:52:22.831547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.050 [2024-11-29 07:52:22.831592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:33.050 [2024-11-29 07:52:22.831613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:33.050 [2024-11-29 07:52:22.831622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.050 [2024-11-29 07:52:22.831646] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:33.050 [2024-11-29 07:52:22.834726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.050 [2024-11-29 07:52:22.834770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:33.050 [2024-11-29 07:52:22.834782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.064 ms 00:20:33.050 [2024-11-29 07:52:22.834790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.050 [2024-11-29 07:52:22.837803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.050 [2024-11-29 07:52:22.837967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:33.050 [2024-11-29 07:52:22.837987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.983 ms 00:20:33.050 [2024-11-29 07:52:22.837995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.050 [2024-11-29 07:52:22.842615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.050 [2024-11-29 07:52:22.842653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:33.050 [2024-11-29 07:52:22.842664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.591 ms 00:20:33.050 [2024-11-29 07:52:22.842672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.050 [2024-11-29 07:52:22.849610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.050 [2024-11-29 07:52:22.849652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:33.050 [2024-11-29 07:52:22.849663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.903 ms 00:20:33.050 [2024-11-29 07:52:22.849671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.050 [2024-11-29 07:52:22.875240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.050 [2024-11-29 07:52:22.875291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:33.050 [2024-11-29 07:52:22.875302] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 25.517 ms 00:20:33.050 [2024-11-29 07:52:22.875311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.050 [2024-11-29 07:52:22.891782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.050 [2024-11-29 07:52:22.891834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:33.050 [2024-11-29 07:52:22.891848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.404 ms 00:20:33.050 [2024-11-29 07:52:22.891857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.050 [2024-11-29 07:52:22.892017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.050 [2024-11-29 07:52:22.892029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:33.050 [2024-11-29 07:52:22.892048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:20:33.050 [2024-11-29 07:52:22.892055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.050 [2024-11-29 07:52:22.917956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.050 [2024-11-29 07:52:22.918002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:33.050 [2024-11-29 07:52:22.918014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.883 ms 00:20:33.050 [2024-11-29 07:52:22.918022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.050 [2024-11-29 07:52:22.947394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.050 [2024-11-29 07:52:22.947637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:33.050 [2024-11-29 07:52:22.947716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.321 ms 00:20:33.050 [2024-11-29 07:52:22.947741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.050 [2024-11-29 07:52:22.972046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.050 [2024-11-29 07:52:22.972225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:33.050 [2024-11-29 07:52:22.972294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.230 ms 00:20:33.050 [2024-11-29 07:52:22.972319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.313 [2024-11-29 07:52:22.996951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.313 [2024-11-29 07:52:22.997143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:33.313 [2024-11-29 07:52:22.997203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.216 ms 00:20:33.313 [2024-11-29 07:52:22.997227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.313 [2024-11-29 07:52:22.997278] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:33.313 [2024-11-29 07:52:22.997309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:33.313 [2024-11-29 07:52:22.997342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:33.313 [2024-11-29 07:52:22.997373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:33.313 [2024-11-29 07:52:22.997402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:20:33.313 [2024-11-29 07:52:22.997510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:33.313 [2024-11-29 07:52:22.997543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:33.313 [2024-11-29 07:52:22.997573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:33.313 [2024-11-29 07:52:22.997604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:33.313 [2024-11-29 07:52:22.997634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:33.313 [2024-11-29 07:52:22.997682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:33.313 [2024-11-29 07:52:22.997755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:33.313 [2024-11-29 07:52:22.997806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:33.313 [2024-11-29 07:52:22.997853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:33.313 [2024-11-29 07:52:22.997975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:33.313 [2024-11-29 07:52:22.998013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:33.313 [2024-11-29 07:52:22.998044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:33.313 [2024-11-29 07:52:22.998073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:33.313 [2024-11-29 07:52:22.998103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:33.313 [2024-11-29 07:52:22.998132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:33.313 [2024-11-29 07:52:22.998166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:33.313 [2024-11-29 07:52:22.998244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:33.313 [2024-11-29 07:52:22.998278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:33.313 [2024-11-29 07:52:22.998307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:33.313 [2024-11-29 07:52:22.998337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:33.313 [2024-11-29 07:52:22.998366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:33.313 [2024-11-29 07:52:22.998395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:33.313 [2024-11-29 07:52:22.998424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:33.313 [2024-11-29 07:52:22.998508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:33.313 [2024-11-29 07:52:22.998539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:20:33.313 [2024-11-29 07:52:22.998569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:33.313 [2024-11-29 07:52:22.998598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:33.313 [2024-11-29 07:52:22.998627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:33.313 [2024-11-29 07:52:22.998656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:33.313 [2024-11-29 07:52:22.998685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:33.313 [2024-11-29 07:52:22.998781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:33.313 [2024-11-29 07:52:22.998842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:33.313 [2024-11-29 07:52:22.998873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:33.313 [2024-11-29 07:52:22.998903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:33.313 [2024-11-29 07:52:22.998933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:33.313 [2024-11-29 07:52:22.999007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:33.313 [2024-11-29 07:52:22.999039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:33.313 [2024-11-29 07:52:22.999069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:33.314 [2024-11-29 07:52:22.999118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:33.314 [2024-11-29 07:52:22.999152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:33.314 [2024-11-29 07:52:22.999181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:33.314 [2024-11-29 07:52:22.999210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:33.314 [2024-11-29 07:52:22.999302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:33.314 [2024-11-29 07:52:22.999333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:33.314 [2024-11-29 07:52:22.999362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:33.314 [2024-11-29 07:52:22.999430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:33.314 [2024-11-29 07:52:22.999474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:33.314 [2024-11-29 07:52:22.999504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:33.314 [2024-11-29 07:52:22.999613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:33.314 [2024-11-29 07:52:22.999645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:33.314 [2024-11-29 07:52:22.999679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:33.314 [2024-11-29 07:52:22.999750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:33.314 [2024-11-29 07:52:22.999783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:33.314 [2024-11-29 07:52:22.999813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:33.314 [2024-11-29 07:52:22.999842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:33.314 [2024-11-29 07:52:22.999952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:33.314 [2024-11-29 07:52:22.999975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:33.314 [2024-11-29 07:52:22.999984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:33.314 [2024-11-29 07:52:22.999992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:33.314 [2024-11-29 07:52:23.000001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:33.314 [2024-11-29 07:52:23.000010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:33.314 [2024-11-29 07:52:23.000018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:33.314 [2024-11-29 07:52:23.000026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:33.314 [2024-11-29 07:52:23.000033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:33.314 [2024-11-29 07:52:23.000041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:33.314 [2024-11-29 07:52:23.000049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:33.314 [2024-11-29 07:52:23.000057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:33.314 [2024-11-29 07:52:23.000065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:33.314 [2024-11-29 07:52:23.000074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:33.314 [2024-11-29 07:52:23.000082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:33.314 [2024-11-29 07:52:23.000089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:33.314 [2024-11-29 07:52:23.000098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:33.314 [2024-11-29 07:52:23.000106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:33.314 [2024-11-29 07:52:23.000114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:33.314 [2024-11-29 07:52:23.000121] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:33.314 [2024-11-29 07:52:23.000129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:33.314 [2024-11-29 07:52:23.000137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:33.314 [2024-11-29 07:52:23.000144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:33.314 [2024-11-29 07:52:23.000152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:33.314 [2024-11-29 07:52:23.000160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:33.314 [2024-11-29 07:52:23.000168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:33.314 [2024-11-29 07:52:23.000176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:33.314 [2024-11-29 07:52:23.000183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:33.314 [2024-11-29 07:52:23.000190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:33.314 [2024-11-29 07:52:23.000198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:33.314 [2024-11-29 07:52:23.000206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:33.314 [2024-11-29 07:52:23.000215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:33.314 [2024-11-29 07:52:23.000222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:33.314 [2024-11-29 07:52:23.000230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:33.314 [2024-11-29 07:52:23.000248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:33.314 [2024-11-29 07:52:23.000257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:33.314 [2024-11-29 07:52:23.000265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:33.314 [2024-11-29 07:52:23.000273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:33.314 [2024-11-29 07:52:23.000282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:33.314 [2024-11-29 07:52:23.000289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:33.314 [2024-11-29 07:52:23.000297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:33.314 [2024-11-29 07:52:23.000315] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:33.314 [2024-11-29 07:52:23.000324] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3615a4e7-f157-4413-91c6-683bd04f0ab0 00:20:33.314 [2024-11-29 07:52:23.000334] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:33.314 [2024-11-29 07:52:23.000341] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:20:33.314 [2024-11-29 07:52:23.000349] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:33.314 [2024-11-29 07:52:23.000359] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:33.314 [2024-11-29 07:52:23.000367] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:33.315 [2024-11-29 07:52:23.000376] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:33.315 [2024-11-29 07:52:23.000388] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:33.315 [2024-11-29 07:52:23.000395] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:33.315 [2024-11-29 07:52:23.000402] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:33.315 [2024-11-29 07:52:23.000411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.315 [2024-11-29 07:52:23.000420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:33.315 [2024-11-29 07:52:23.000430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.134 ms 00:20:33.315 [2024-11-29 07:52:23.000437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.315 [2024-11-29 07:52:23.014950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.315 [2024-11-29 07:52:23.015100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:33.315 [2024-11-29 07:52:23.015156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.262 ms 00:20:33.315 [2024-11-29 07:52:23.015179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.315 [2024-11-29 07:52:23.015660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.315 [2024-11-29 07:52:23.015711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:33.315 [2024-11-29 07:52:23.016104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.417 ms 00:20:33.315 [2024-11-29 07:52:23.016117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.315 [2024-11-29 07:52:23.056205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.315 [2024-11-29 07:52:23.056265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:33.315 [2024-11-29 07:52:23.056279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.315 [2024-11-29 07:52:23.056296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.315 [2024-11-29 07:52:23.056401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.315 [2024-11-29 07:52:23.056411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:33.315 [2024-11-29 07:52:23.056420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.315 [2024-11-29 07:52:23.056429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.315 [2024-11-29 07:52:23.056506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.315 [2024-11-29 07:52:23.056517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:33.315 [2024-11-29 07:52:23.056526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.315 [2024-11-29 07:52:23.056535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.315 [2024-11-29 07:52:23.056559] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.315 [2024-11-29 07:52:23.056569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:33.315 [2024-11-29 07:52:23.056577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.315 [2024-11-29 07:52:23.056585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.315 [2024-11-29 07:52:23.143082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.315 [2024-11-29 07:52:23.143350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:33.315 [2024-11-29 07:52:23.143374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.315 [2024-11-29 07:52:23.143391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.315 [2024-11-29 07:52:23.213188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.315 [2024-11-29 07:52:23.213247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:33.315 [2024-11-29 07:52:23.213260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.315 [2024-11-29 07:52:23.213270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.315 [2024-11-29 07:52:23.213338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.315 [2024-11-29 07:52:23.213349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:33.315 [2024-11-29 07:52:23.213358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.315 [2024-11-29 07:52:23.213366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.315 [2024-11-29 07:52:23.213400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.315 [2024-11-29 07:52:23.213416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:33.315 [2024-11-29 07:52:23.213425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.315 [2024-11-29 07:52:23.213433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.315 [2024-11-29 07:52:23.213589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.315 [2024-11-29 07:52:23.213608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:33.315 [2024-11-29 07:52:23.213622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.315 [2024-11-29 07:52:23.213634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.315 [2024-11-29 07:52:23.213689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.315 [2024-11-29 07:52:23.213705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:33.315 [2024-11-29 07:52:23.213726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.315 [2024-11-29 07:52:23.213739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.315 [2024-11-29 07:52:23.213800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.315 [2024-11-29 07:52:23.213818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:33.315 [2024-11-29 07:52:23.213826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.315 [2024-11-29 07:52:23.213835] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:33.315 [2024-11-29 07:52:23.213888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.315 [2024-11-29 07:52:23.213903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:33.315 [2024-11-29 07:52:23.213912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.315 [2024-11-29 07:52:23.213920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.315 [2024-11-29 07:52:23.214080] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 382.518 ms, result 0 00:20:34.257 00:20:34.257 00:20:34.257 07:52:23 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=76925 00:20:34.257 07:52:23 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:20:34.257 07:52:23 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 76925 00:20:34.257 07:52:23 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76925 ']' 00:20:34.257 07:52:23 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:34.258 07:52:23 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:34.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:34.258 07:52:23 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:34.258 07:52:23 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:34.258 07:52:23 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:34.258 [2024-11-29 07:52:24.074688] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:20:34.258 [2024-11-29 07:52:24.074830] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76925 ] 00:20:34.519 [2024-11-29 07:52:24.239588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.519 [2024-11-29 07:52:24.361035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:35.462 07:52:25 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:35.462 07:52:25 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:20:35.462 07:52:25 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:35.462 [2024-11-29 07:52:25.266019] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:35.462 [2024-11-29 07:52:25.266109] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:35.723 [2024-11-29 07:52:25.445422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.723 [2024-11-29 07:52:25.445504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:35.724 [2024-11-29 07:52:25.445522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:35.724 [2024-11-29 07:52:25.445531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.724 [2024-11-29 07:52:25.448574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.724 [2024-11-29 07:52:25.448788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:35.724 [2024-11-29 07:52:25.448813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.018 ms 00:20:35.724 [2024-11-29 07:52:25.448822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.724 [2024-11-29 07:52:25.449156] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:35.724 [2024-11-29 07:52:25.450061] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:35.724 [2024-11-29 07:52:25.450104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.724 [2024-11-29 07:52:25.450113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:35.724 [2024-11-29 07:52:25.450126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.972 ms 00:20:35.724 [2024-11-29 07:52:25.450136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.724 [2024-11-29 07:52:25.452059] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:35.724 [2024-11-29 07:52:25.466705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.724 [2024-11-29 07:52:25.466765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:35.724 [2024-11-29 07:52:25.466780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.654 ms 00:20:35.724 [2024-11-29 07:52:25.466791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.724 [2024-11-29 07:52:25.466914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.724 [2024-11-29 07:52:25.466928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:35.724 [2024-11-29 07:52:25.466939] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:20:35.724 [2024-11-29 07:52:25.466948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.724 [2024-11-29 07:52:25.475464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.724 [2024-11-29 07:52:25.475517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:35.724 [2024-11-29 07:52:25.475528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.462 ms 00:20:35.724 [2024-11-29 07:52:25.475538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.724 [2024-11-29 07:52:25.475661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.724 [2024-11-29 07:52:25.475674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:35.724 [2024-11-29 07:52:25.475683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:20:35.724 [2024-11-29 07:52:25.475698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.724 [2024-11-29 07:52:25.475723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.724 [2024-11-29 07:52:25.475733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:35.724 [2024-11-29 07:52:25.475742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:35.724 [2024-11-29 07:52:25.475751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.724 [2024-11-29 07:52:25.475775] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:35.724 [2024-11-29 07:52:25.479885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.724 [2024-11-29 07:52:25.479928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:35.724 [2024-11-29 07:52:25.479942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.113 ms 00:20:35.724 [2024-11-29 07:52:25.479950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.724 [2024-11-29 07:52:25.480033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.724 [2024-11-29 07:52:25.480042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:35.724 [2024-11-29 07:52:25.480057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:35.724 [2024-11-29 07:52:25.480065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.724 [2024-11-29 07:52:25.480089] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:35.724 [2024-11-29 07:52:25.480110] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:35.724 [2024-11-29 07:52:25.480155] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:35.724 [2024-11-29 07:52:25.480171] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:35.724 [2024-11-29 07:52:25.480282] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:35.724 [2024-11-29 07:52:25.480293] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:35.724 [2024-11-29 07:52:25.480313] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:35.724 [2024-11-29 07:52:25.480324] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:35.724 [2024-11-29 07:52:25.480336] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:35.724 [2024-11-29 07:52:25.480346] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:35.724 [2024-11-29 07:52:25.480355] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:35.724 [2024-11-29 07:52:25.480363] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:35.724 [2024-11-29 07:52:25.480378] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:35.724 [2024-11-29 07:52:25.480387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.724 [2024-11-29 07:52:25.480396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:35.724 [2024-11-29 07:52:25.480403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:20:35.724 [2024-11-29 07:52:25.480414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.724 [2024-11-29 07:52:25.480526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.724 [2024-11-29 07:52:25.480537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:35.724 [2024-11-29 07:52:25.480545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:20:35.724 [2024-11-29 07:52:25.480555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.724 [2024-11-29 07:52:25.480660] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:35.724 [2024-11-29 07:52:25.480673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:35.724 [2024-11-29 07:52:25.480682] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:35.724 [2024-11-29 07:52:25.480692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:35.724 [2024-11-29 07:52:25.480700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:35.724 [2024-11-29 07:52:25.480709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:35.724 [2024-11-29 07:52:25.480717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:35.724 [2024-11-29 07:52:25.480729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:35.724 [2024-11-29 07:52:25.480736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:35.724 [2024-11-29 07:52:25.480745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:35.724 [2024-11-29 07:52:25.480751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:35.724 [2024-11-29 07:52:25.480761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:35.724 [2024-11-29 07:52:25.480767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:35.724 [2024-11-29 07:52:25.480776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:35.724 [2024-11-29 07:52:25.480783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:35.724 [2024-11-29 07:52:25.480791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:35.724 
[2024-11-29 07:52:25.480798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:35.724 [2024-11-29 07:52:25.480807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:35.724 [2024-11-29 07:52:25.480821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:35.724 [2024-11-29 07:52:25.480831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:35.724 [2024-11-29 07:52:25.480838] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:35.724 [2024-11-29 07:52:25.480847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:35.724 [2024-11-29 07:52:25.480854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:35.724 [2024-11-29 07:52:25.480865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:35.724 [2024-11-29 07:52:25.480902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:35.724 [2024-11-29 07:52:25.480913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:35.724 [2024-11-29 07:52:25.480920] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:35.724 [2024-11-29 07:52:25.480929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:35.724 [2024-11-29 07:52:25.480936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:35.724 [2024-11-29 07:52:25.480945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:35.724 [2024-11-29 07:52:25.480952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:35.724 [2024-11-29 07:52:25.480960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:35.724 [2024-11-29 07:52:25.480967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:35.724 [2024-11-29 07:52:25.480978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:35.724 [2024-11-29 07:52:25.480985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:35.724 [2024-11-29 07:52:25.480995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:35.724 [2024-11-29 07:52:25.481001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:35.724 [2024-11-29 07:52:25.481010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:35.724 [2024-11-29 07:52:25.481017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:35.724 [2024-11-29 07:52:25.481027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:35.724 [2024-11-29 07:52:25.481034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:35.724 [2024-11-29 07:52:25.481043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:35.725 [2024-11-29 07:52:25.481050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:35.725 [2024-11-29 07:52:25.481058] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:35.725 [2024-11-29 07:52:25.481068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:35.725 [2024-11-29 07:52:25.481078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:35.725 [2024-11-29 07:52:25.481085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:35.725 [2024-11-29 07:52:25.481095] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:20:35.725 [2024-11-29 07:52:25.481102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:35.725 [2024-11-29 07:52:25.481111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:35.725 [2024-11-29 07:52:25.481118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:35.725 [2024-11-29 07:52:25.481131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:35.725 [2024-11-29 07:52:25.481139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:35.725 [2024-11-29 07:52:25.481149] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:35.725 [2024-11-29 07:52:25.481159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:35.725 [2024-11-29 07:52:25.481173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:35.725 [2024-11-29 07:52:25.481182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:35.725 [2024-11-29 07:52:25.481192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:35.725 [2024-11-29 07:52:25.481200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:35.725 [2024-11-29 07:52:25.481210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:35.725 [2024-11-29 07:52:25.481217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:35.725 [2024-11-29 07:52:25.481228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:35.725 [2024-11-29 07:52:25.481235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:35.725 [2024-11-29 07:52:25.481245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:35.725 [2024-11-29 07:52:25.481253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:35.725 [2024-11-29 07:52:25.481263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:35.725 [2024-11-29 07:52:25.481272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:35.725 [2024-11-29 07:52:25.481281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:35.725 [2024-11-29 07:52:25.481290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:35.725 [2024-11-29 07:52:25.481300] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:35.725 [2024-11-29 
07:52:25.481308] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:35.725 [2024-11-29 07:52:25.481320] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:35.725 [2024-11-29 07:52:25.481328] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:35.725 [2024-11-29 07:52:25.481338] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:35.725 [2024-11-29 07:52:25.481346] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:35.725 [2024-11-29 07:52:25.481355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.725 [2024-11-29 07:52:25.481363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:35.725 [2024-11-29 07:52:25.481372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.762 ms 00:20:35.725 [2024-11-29 07:52:25.481382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.725 [2024-11-29 07:52:25.513969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.725 [2024-11-29 07:52:25.514194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:35.725 [2024-11-29 07:52:25.514219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.526 ms 00:20:35.725 [2024-11-29 07:52:25.514232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.725 [2024-11-29 07:52:25.514378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.725 [2024-11-29 07:52:25.514390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:35.725 [2024-11-29 07:52:25.514401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:20:35.725 [2024-11-29 07:52:25.514409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.725 [2024-11-29 07:52:25.549652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.725 [2024-11-29 07:52:25.549702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:35.725 [2024-11-29 07:52:25.549716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.217 ms 00:20:35.725 [2024-11-29 07:52:25.549724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.725 [2024-11-29 07:52:25.549818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.725 [2024-11-29 07:52:25.549828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:35.725 [2024-11-29 07:52:25.549840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:35.725 [2024-11-29 07:52:25.549848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.725 [2024-11-29 07:52:25.550390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.725 [2024-11-29 07:52:25.550427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:35.725 [2024-11-29 07:52:25.550441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.516 ms 00:20:35.725 [2024-11-29 07:52:25.550470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
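The superblock metadata records above express every region as hex block offsets and sizes. Taking the FTL block size as 4 KiB (an assumption, but one the dump itself bears out: the type:0x2 region's blk_sz:0x5a00 is 23040 blocks, i.e. the 90.00 MiB reported for l2p in the NV cache layout), the hex entries convert straight back to the MiB figures. A quick sketch:

```python
FTL_BLOCK_SIZE = 4096  # assumed 4 KiB FTL block; consistent with the MiB figures above

def region_mib(blk_offs: int, blk_sz: int) -> tuple[float, float]:
    """Convert a 'Region type:... blk_offs:... blk_sz:...' record to (offset, size) in MiB."""
    to_mib = lambda blocks: blocks * FTL_BLOCK_SIZE / (1 << 20)
    return to_mib(blk_offs), to_mib(blk_sz)

# Region type:0x2 blk_offs:0x20 blk_sz:0x5a00 -- evidently the l2p region
print(region_mib(0x20, 0x5a00))  # (0.125, 90.0) -> "offset: 0.12 MiB", "blocks: 90.00 MiB"
```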
[FTL][ftl0] status: 0 00:20:35.725 [2024-11-29 07:52:25.550620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.725 [2024-11-29 07:52:25.550629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:35.725 [2024-11-29 07:52:25.550640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:20:35.725 [2024-11-29 07:52:25.550648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.725 [2024-11-29 07:52:25.568696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.725 [2024-11-29 07:52:25.568912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:35.725 [2024-11-29 07:52:25.568938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.021 ms 00:20:35.725 [2024-11-29 07:52:25.568946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.725 [2024-11-29 07:52:25.597873] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:35.725 [2024-11-29 07:52:25.598092] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:35.725 [2024-11-29 07:52:25.598126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.725 [2024-11-29 07:52:25.598136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:35.725 [2024-11-29 07:52:25.598151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.053 ms 00:20:35.725 [2024-11-29 07:52:25.598168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.725 [2024-11-29 07:52:25.625131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.725 [2024-11-29 07:52:25.625337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:35.725 [2024-11-29 07:52:25.625370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.518 ms 00:20:35.725 [2024-11-29 07:52:25.625379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.725 [2024-11-29 07:52:25.638579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.725 [2024-11-29 07:52:25.638629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:35.725 [2024-11-29 07:52:25.638647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.984 ms 00:20:35.725 [2024-11-29 07:52:25.638655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.725 [2024-11-29 07:52:25.651790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.725 [2024-11-29 07:52:25.651974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:35.725 [2024-11-29 07:52:25.652002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.031 ms 00:20:35.725 [2024-11-29 07:52:25.652010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.725 [2024-11-29 07:52:25.652731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.725 [2024-11-29 07:52:25.652766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:35.725 [2024-11-29 07:52:25.652780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.589 ms 00:20:35.725 [2024-11-29 07:52:25.652788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.985 [2024-11-29 
07:52:25.719793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.985 [2024-11-29 07:52:25.719859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:35.985 [2024-11-29 07:52:25.719878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.972 ms 00:20:35.985 [2024-11-29 07:52:25.719887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.985 [2024-11-29 07:52:25.731359] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:35.985 [2024-11-29 07:52:25.751213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.985 [2024-11-29 07:52:25.751280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:35.985 [2024-11-29 07:52:25.751294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.217 ms 00:20:35.985 [2024-11-29 07:52:25.751305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.985 [2024-11-29 07:52:25.751404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.985 [2024-11-29 07:52:25.751417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:35.985 [2024-11-29 07:52:25.751427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:35.985 [2024-11-29 07:52:25.751437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.985 [2024-11-29 07:52:25.751530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.985 [2024-11-29 07:52:25.751542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:35.985 [2024-11-29 07:52:25.751551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:20:35.985 [2024-11-29 07:52:25.751564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.985 [2024-11-29 07:52:25.751590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.985 [2024-11-29 07:52:25.751603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:35.985 [2024-11-29 07:52:25.751612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:35.985 [2024-11-29 07:52:25.751622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.985 [2024-11-29 07:52:25.751657] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:35.985 [2024-11-29 07:52:25.751673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.986 [2024-11-29 07:52:25.751685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:35.986 [2024-11-29 07:52:25.751695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:35.986 [2024-11-29 07:52:25.751704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.986 [2024-11-29 07:52:25.777801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.986 [2024-11-29 07:52:25.777991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:35.986 [2024-11-29 07:52:25.778020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.065 ms 00:20:35.986 [2024-11-29 07:52:25.778030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.986 [2024-11-29 07:52:25.778161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.986 [2024-11-29 07:52:25.778173] 
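The l2p cache notice above ("l2p maximum resident size is: 59 (of 60) MiB") lines up with the layout parameters logged at startup: a full L2P table of 23592960 entries at 4 bytes per address is 90 MiB, more than the 60 MiB cache, so only part of the table can be resident at once and a cached L2P is used. Plain arithmetic, no SPDK internals assumed:

```python
entries   = 23_592_960   # "L2P entries" from the layout setup notice
addr_size = 4            # "L2P address size" in bytes
print(entries * addr_size / (1 << 20))  # 90.0 -> the 90.00 MiB l2p region above
```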
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:35.986 [2024-11-29 07:52:25.778189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:20:35.986 [2024-11-29 07:52:25.778197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.986 [2024-11-29 07:52:25.779353] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:35.986 [2024-11-29 07:52:25.782964] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 333.598 ms, result 0 00:20:35.986 [2024-11-29 07:52:25.785120] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:35.986 Some configs were skipped because the RPC state that can call them passed over. 00:20:35.986 07:52:25 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:20:36.246 [2024-11-29 07:52:26.030047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.246 [2024-11-29 07:52:26.030125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:36.246 [2024-11-29 07:52:26.030140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.085 ms 00:20:36.246 [2024-11-29 07:52:26.030152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.246 [2024-11-29 07:52:26.030190] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.236 ms, result 0 00:20:36.246 true 00:20:36.246 07:52:26 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:20:36.507 [2024-11-29 07:52:26.241656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.507 [2024-11-29 07:52:26.241720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:36.507 [2024-11-29 07:52:26.241736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.434 ms 00:20:36.507 [2024-11-29 07:52:26.241745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.507 [2024-11-29 07:52:26.241787] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.574 ms, result 0 00:20:36.507 true 00:20:36.507 07:52:26 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 76925 00:20:36.507 07:52:26 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76925 ']' 00:20:36.507 07:52:26 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76925 00:20:36.507 07:52:26 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:20:36.507 07:52:26 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:36.507 07:52:26 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76925 00:20:36.507 killing process with pid 76925 00:20:36.507 07:52:26 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:36.507 07:52:26 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:36.508 07:52:26 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76925' 00:20:36.508 07:52:26 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76925 00:20:36.508 07:52:26 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76925 00:20:37.453 [2024-11-29 07:52:27.182978] 
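The two `bdev_ftl_unmap` RPCs issued by trim.sh above target the first and the last 1024-block ranges of the device: 23591936 + 1024 = 23592960, exactly the L2P entry count, so the second call ends on the final addressable block. A sanity check of the ranges, with values copied from the rpc.py invocations (the 1024-block alignment requirement is an assumption about bdev_ftl_unmap, not something the log states):

```python
l2p_entries = 23_592_960                  # device size in FTL blocks (from the layout notice)
calls = [(0, 1024), (23_591_936, 1024)]   # (--lba, --num_blocks) for each rpc.py call
for lba, num in calls:
    assert lba % 1024 == 0 and num % 1024 == 0  # assumed unmap granularity
    assert lba + num <= l2p_entries             # stays inside the device
print(calls[-1][0] + calls[-1][1] == l2p_entries)  # True: the trim reaches the last block
```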
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.453 [2024-11-29 07:52:27.183076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:37.453 [2024-11-29 07:52:27.183095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:37.453 [2024-11-29 07:52:27.183107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.453 [2024-11-29 07:52:27.183138] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:37.454 [2024-11-29 07:52:27.186571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.454 [2024-11-29 07:52:27.186626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:37.454 [2024-11-29 07:52:27.186645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.406 ms 00:20:37.454 [2024-11-29 07:52:27.186655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.454 [2024-11-29 07:52:27.186990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.454 [2024-11-29 07:52:27.187003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:37.454 [2024-11-29 07:52:27.187016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:20:37.454 [2024-11-29 07:52:27.187026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.454 [2024-11-29 07:52:27.191753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.454 [2024-11-29 07:52:27.191808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:37.454 [2024-11-29 07:52:27.191823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.699 ms 00:20:37.454 [2024-11-29 07:52:27.191833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.454 [2024-11-29 07:52:27.198835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.454 [2024-11-29 07:52:27.198885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:37.454 [2024-11-29 07:52:27.198900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.944 ms 00:20:37.454 [2024-11-29 07:52:27.198909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.454 [2024-11-29 07:52:27.210287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.454 [2024-11-29 07:52:27.210638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:37.454 [2024-11-29 07:52:27.210671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.293 ms 00:20:37.454 [2024-11-29 07:52:27.210680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.454 [2024-11-29 07:52:27.221122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.454 [2024-11-29 07:52:27.221180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:37.454 [2024-11-29 07:52:27.221197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.258 ms 00:20:37.454 [2024-11-29 07:52:27.221206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.454 [2024-11-29 07:52:27.221361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.454 [2024-11-29 07:52:27.221371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:37.454 [2024-11-29 07:52:27.221385] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:20:37.454 [2024-11-29 07:52:27.221394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.454 [2024-11-29 07:52:27.233381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.454 [2024-11-29 07:52:27.233613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:37.454 [2024-11-29 07:52:27.233643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.957 ms 00:20:37.454 [2024-11-29 07:52:27.233652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.454 [2024-11-29 07:52:27.245018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.454 [2024-11-29 07:52:27.245230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:37.454 [2024-11-29 07:52:27.245260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.175 ms 00:20:37.454 [2024-11-29 07:52:27.245268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.454 [2024-11-29 07:52:27.255776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.454 [2024-11-29 07:52:27.255975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:37.454 [2024-11-29 07:52:27.256002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.317 ms 00:20:37.454 [2024-11-29 07:52:27.256010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.454 [2024-11-29 07:52:27.274833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.454 [2024-11-29 07:52:27.275052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:37.454 [2024-11-29 07:52:27.275080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.450 ms 00:20:37.454 [2024-11-29 07:52:27.275088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.454 [2024-11-29 07:52:27.275341] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:37.454 [2024-11-29 07:52:27.275376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:37.454 [2024-11-29 07:52:27.275395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:37.454 [2024-11-29 07:52:27.275404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:37.454 [2024-11-29 07:52:27.275415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:37.454 [2024-11-29 07:52:27.275424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:37.454 [2024-11-29 07:52:27.275439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:37.454 [2024-11-29 07:52:27.275473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:37.454 [2024-11-29 07:52:27.275485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:37.454 [2024-11-29 07:52:27.275493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:37.454 [2024-11-29 07:52:27.275504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:37.454 [2024-11-29 
07:52:27.275512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:37.454 [2024-11-29 07:52:27.275523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:37.454 [2024-11-29 07:52:27.275530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:37.454 [2024-11-29 07:52:27.275540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:37.454 [2024-11-29 07:52:27.275549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:37.454 [2024-11-29 07:52:27.275563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:37.454 [2024-11-29 07:52:27.275571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:37.454 [2024-11-29 07:52:27.275581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:37.454 [2024-11-29 07:52:27.275589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:37.454 [2024-11-29 07:52:27.275601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:37.454 [2024-11-29 07:52:27.275610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:37.454 [2024-11-29 07:52:27.275623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:37.454 [2024-11-29 07:52:27.275631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:37.454 [2024-11-29 07:52:27.275641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:37.454 [2024-11-29 07:52:27.275649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:37.454 [2024-11-29 07:52:27.275660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:37.454 [2024-11-29 07:52:27.275670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:37.454 [2024-11-29 07:52:27.275681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:37.454 [2024-11-29 07:52:27.275689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:37.454 [2024-11-29 07:52:27.275698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:37.454 [2024-11-29 07:52:27.275705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:37.454 [2024-11-29 07:52:27.275714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:37.454 [2024-11-29 07:52:27.275723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:37.454 [2024-11-29 07:52:27.275735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:37.454 [2024-11-29 07:52:27.275744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:20:37.454 [2024-11-29 07:52:27.275756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:37.454 [2024-11-29 07:52:27.275764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:37.454 [2024-11-29 07:52:27.275779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:37.454 [2024-11-29 07:52:27.275790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:37.454 [2024-11-29 07:52:27.275830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:37.454 [2024-11-29 07:52:27.275838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:37.454 [2024-11-29 07:52:27.275849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:37.454 [2024-11-29 07:52:27.275858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:37.454 [2024-11-29 07:52:27.275867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:37.454 [2024-11-29 07:52:27.275875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:37.454 [2024-11-29 07:52:27.275885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:37.454 [2024-11-29 07:52:27.275893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:37.454 [2024-11-29 07:52:27.275903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:37.454 [2024-11-29 07:52:27.275912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:37.454 [2024-11-29 07:52:27.275921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:37.454 [2024-11-29 07:52:27.275930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:37.454 [2024-11-29 07:52:27.275939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:37.455 [2024-11-29 07:52:27.275946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:37.455 [2024-11-29 07:52:27.275959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:37.455 [2024-11-29 07:52:27.275967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:37.455 [2024-11-29 07:52:27.275977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:37.455 [2024-11-29 07:52:27.275983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:37.455 [2024-11-29 07:52:27.275994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:37.455 [2024-11-29 07:52:27.276001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:37.455 [2024-11-29 07:52:27.276011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:20:37.455 [2024-11-29 07:52:27.276018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:37.455 [2024-11-29 07:52:27.276028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:37.455 [2024-11-29 07:52:27.276037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:37.455 [2024-11-29 07:52:27.276047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:37.455 [2024-11-29 07:52:27.276054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:37.455 [2024-11-29 07:52:27.276064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:37.455 [2024-11-29 07:52:27.276072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:37.455 [2024-11-29 07:52:27.276083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:37.455 [2024-11-29 07:52:27.276092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:37.455 [2024-11-29 07:52:27.276104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:37.455 [2024-11-29 07:52:27.276112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:37.455 [2024-11-29 07:52:27.276122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:37.455 [2024-11-29 07:52:27.276129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:37.455 [2024-11-29 07:52:27.276140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:37.455 [2024-11-29 07:52:27.276148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:37.455 [2024-11-29 07:52:27.276157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:37.455 [2024-11-29 07:52:27.276164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:37.455 [2024-11-29 07:52:27.276174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:37.455 [2024-11-29 07:52:27.276181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:37.455 [2024-11-29 07:52:27.276192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:37.455 [2024-11-29 07:52:27.276201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:37.455 [2024-11-29 07:52:27.276210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:37.455 [2024-11-29 07:52:27.276223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:37.455 [2024-11-29 07:52:27.276233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:37.455 [2024-11-29 07:52:27.276240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:37.455 [2024-11-29 07:52:27.276251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:37.455 [2024-11-29 07:52:27.276260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:37.455 [2024-11-29 07:52:27.276272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:37.455 [2024-11-29 07:52:27.276280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:37.455 [2024-11-29 07:52:27.276289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:37.455 [2024-11-29 07:52:27.276297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:37.455 [2024-11-29 07:52:27.276307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:37.455 [2024-11-29 07:52:27.276315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:37.455 [2024-11-29 07:52:27.276325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:37.455 [2024-11-29 07:52:27.276333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:37.455 [2024-11-29 07:52:27.276345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:37.455 [2024-11-29 07:52:27.276352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:37.455 [2024-11-29 07:52:27.276362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:37.455 [2024-11-29 07:52:27.276370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:37.455 [2024-11-29 07:52:27.276380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:37.455 [2024-11-29 07:52:27.276405] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:37.455 [2024-11-29 07:52:27.276421] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3615a4e7-f157-4413-91c6-683bd04f0ab0 00:20:37.455 [2024-11-29 07:52:27.276434] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:37.455 [2024-11-29 07:52:27.276458] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:37.455 [2024-11-29 07:52:27.276467] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:37.455 [2024-11-29 07:52:27.276478] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:37.455 [2024-11-29 07:52:27.276490] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:37.455 [2024-11-29 07:52:27.276502] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:37.455 [2024-11-29 07:52:27.276510] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:37.455 [2024-11-29 07:52:27.276520] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:37.455 [2024-11-29 07:52:27.276528] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:37.455 [2024-11-29 07:52:27.276538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
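Two things in the statistics dump above are worth noting. First, every band reads "0 / 261120 wr_cnt: 0 state: free", which a one-pass summary confirms more reliably than eyeballing a hundred lines. Second, "WAF: inf" is not an error: write amplification is total (NAND) writes divided by user (host) writes, and with user writes: 0 against total writes: 960 (all metadata traffic), the ratio is infinite by construction. A sketch of both checks; the regex is illustrative:

```python
import re
from collections import Counter

BAND_RE = re.compile(r"Band \d+: (\d+) / \d+ wr_cnt: (\d+) state: (\w+)")

def summarize(log_text: str) -> None:
    bands = BAND_RE.findall(log_text)
    states = Counter(state for _, _, state in bands)
    print(len(bands), "bands:", dict(states))   # here: 100 bands: {'free': 100}

    total_writes, user_writes = 960, 0          # from the dump_stats notices above
    waf = float("inf") if user_writes == 0 else total_writes / user_writes
    print("WAF:", waf)                          # inf, matching the log
```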
00:20:37.455 [2024-11-29 07:52:27.276547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:37.455 [2024-11-29 07:52:27.276564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.207 ms 00:20:37.455 [2024-11-29 07:52:27.276575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.455 [2024-11-29 07:52:27.291349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.455 [2024-11-29 07:52:27.291395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:37.455 [2024-11-29 07:52:27.291413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.733 ms 00:20:37.455 [2024-11-29 07:52:27.291423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.455 [2024-11-29 07:52:27.291913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.455 [2024-11-29 07:52:27.291929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:37.455 [2024-11-29 07:52:27.291945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.397 ms 00:20:37.455 [2024-11-29 07:52:27.291955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.455 [2024-11-29 07:52:27.344763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.455 [2024-11-29 07:52:27.344808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:37.455 [2024-11-29 07:52:27.344823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.455 [2024-11-29 07:52:27.344833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.455 [2024-11-29 07:52:27.344957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.455 [2024-11-29 07:52:27.344970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:37.455 [2024-11-29 07:52:27.344985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.455 [2024-11-29 07:52:27.344994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.455 [2024-11-29 07:52:27.345051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.455 [2024-11-29 07:52:27.345063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:37.455 [2024-11-29 07:52:27.345076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.455 [2024-11-29 07:52:27.345085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.455 [2024-11-29 07:52:27.345107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.455 [2024-11-29 07:52:27.345117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:37.455 [2024-11-29 07:52:27.345128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.455 [2024-11-29 07:52:27.345139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.717 [2024-11-29 07:52:27.437309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.717 [2024-11-29 07:52:27.437373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:37.717 [2024-11-29 07:52:27.437390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.717 [2024-11-29 07:52:27.437400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.717 [2024-11-29 
07:52:27.512024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.717 [2024-11-29 07:52:27.512088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:37.717 [2024-11-29 07:52:27.512107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.717 [2024-11-29 07:52:27.512117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.717 [2024-11-29 07:52:27.512215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.717 [2024-11-29 07:52:27.512227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:37.717 [2024-11-29 07:52:27.512242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.717 [2024-11-29 07:52:27.512251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.717 [2024-11-29 07:52:27.512291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.717 [2024-11-29 07:52:27.512302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:37.717 [2024-11-29 07:52:27.512315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.717 [2024-11-29 07:52:27.512324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.717 [2024-11-29 07:52:27.512440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.717 [2024-11-29 07:52:27.512482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:37.717 [2024-11-29 07:52:27.512494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.717 [2024-11-29 07:52:27.512503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.717 [2024-11-29 07:52:27.512548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.717 [2024-11-29 07:52:27.512559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:37.717 [2024-11-29 07:52:27.512570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.717 [2024-11-29 07:52:27.512580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.717 [2024-11-29 07:52:27.512660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.717 [2024-11-29 07:52:27.512672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:37.717 [2024-11-29 07:52:27.512686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.717 [2024-11-29 07:52:27.512694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.717 [2024-11-29 07:52:27.512760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.717 [2024-11-29 07:52:27.512772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:37.717 [2024-11-29 07:52:27.512786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.717 [2024-11-29 07:52:27.512795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.717 [2024-11-29 07:52:27.513003] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 329.992 ms, result 0 00:20:38.660 07:52:28 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:38.660 [2024-11-29 07:52:28.345146] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:20:38.660 [2024-11-29 07:52:28.345587] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76979 ] 00:20:38.660 [2024-11-29 07:52:28.511016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.922 [2024-11-29 07:52:28.637083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:39.183 [2024-11-29 07:52:28.938588] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:39.183 [2024-11-29 07:52:28.938679] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:39.183 [2024-11-29 07:52:29.101465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.183 [2024-11-29 07:52:29.101528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:39.183 [2024-11-29 07:52:29.101543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:39.183 [2024-11-29 07:52:29.101552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.183 [2024-11-29 07:52:29.109167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.183 [2024-11-29 07:52:29.109478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:39.183 [2024-11-29 07:52:29.109506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.582 ms 00:20:39.183 [2024-11-29 07:52:29.109519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.183 [2024-11-29 07:52:29.110058] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:39.183 [2024-11-29 07:52:29.110980] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:39.183 [2024-11-29 07:52:29.111026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.183 [2024-11-29 07:52:29.111038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:39.183 [2024-11-29 07:52:29.111051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.998 ms 00:20:39.183 [2024-11-29 07:52:29.111061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.183 [2024-11-29 07:52:29.113532] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:39.446 [2024-11-29 07:52:29.128986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.446 [2024-11-29 07:52:29.129038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:39.446 [2024-11-29 07:52:29.129053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.456 ms 00:20:39.446 [2024-11-29 07:52:29.129063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.446 [2024-11-29 07:52:29.129193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.446 [2024-11-29 07:52:29.129209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:39.446 [2024-11-29 07:52:29.129220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:20:39.446 [2024-11-29 
07:52:29.129230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.446 [2024-11-29 07:52:29.140551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.446 [2024-11-29 07:52:29.140592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:39.446 [2024-11-29 07:52:29.140603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.270 ms 00:20:39.446 [2024-11-29 07:52:29.140612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.446 [2024-11-29 07:52:29.140745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.446 [2024-11-29 07:52:29.140758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:39.446 [2024-11-29 07:52:29.140769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:20:39.446 [2024-11-29 07:52:29.140779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.446 [2024-11-29 07:52:29.140812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.446 [2024-11-29 07:52:29.140823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:39.446 [2024-11-29 07:52:29.140832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:39.447 [2024-11-29 07:52:29.140841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.447 [2024-11-29 07:52:29.140880] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:39.447 [2024-11-29 07:52:29.145453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.447 [2024-11-29 07:52:29.145492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:39.447 [2024-11-29 07:52:29.145503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.585 ms 00:20:39.447 [2024-11-29 07:52:29.145512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.447 [2024-11-29 07:52:29.145576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.447 [2024-11-29 07:52:29.145587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:39.447 [2024-11-29 07:52:29.145597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:39.447 [2024-11-29 07:52:29.145606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.447 [2024-11-29 07:52:29.145631] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:39.447 [2024-11-29 07:52:29.145657] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:39.447 [2024-11-29 07:52:29.145700] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:39.447 [2024-11-29 07:52:29.145718] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:39.447 [2024-11-29 07:52:29.145831] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:39.447 [2024-11-29 07:52:29.145855] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:39.447 [2024-11-29 07:52:29.145867] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
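The process whose startup is being logged here is the `spdk_dd` invocation above: it opens ftl0 as the input bdev (--ib) and copies --count=65536 blocks into the --of data file. At the 4 KiB FTL block size assumed earlier, that works out to a 256 MiB read:

```python
count = 65536   # --count from the spdk_dd command line
block = 4096    # assumed 4 KiB FTL block size
print(count * block / (1 << 20))  # 256.0 -> MiB expected in the --of data file
```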
[2024-11-29 07:52:29.145884] ftl_layout_setup: *NOTICE*: [FTL][ftl0]
  Base device capacity:     103424.00 MiB
  NV cache device capacity: 5171.00 MiB
  L2P entries:              23592960 (address size: 4)
  P2L checkpoint pages:     2048
  NV cache chunk count:     5

trace_step [FTL][ftl0]: Initialize layout  (0.312 ms, status 0)
trace_step [FTL][ftl0]: Verify layout      (0.069 ms, status 0)

ftl_layout_dump: NV cache layout (region / offset / blocks, MiB):
  sb                 0.00    0.12
  l2p                0.12   90.00
  band_md           90.12    0.50
  band_md_mirror    90.62    0.50
  nvc_md           123.88    0.12
  nvc_md_mirror    124.00    0.12
  p2l0              91.12    8.00
  p2l1              99.12    8.00
  p2l2             107.12    8.00
  p2l3             115.12    8.00
  trim_md          123.12    0.25
  trim_md_mirror   123.38    0.25
  trim_log         123.62    0.12
  trim_log_mirror  123.75    0.12

ftl_layout_dump: Base device layout (region / offset / blocks, MiB):
  sb_mirror          0.00       0.12
  vmap          102400.25       3.38
  data_btm           0.25  102400.00

ftl_superblock_v5_md_layout_dump: SB metadata layout - nvc (type / ver / blk_offs / blk_sz):
  0x0         5  0x0     0x20
  0x2         0  0x20    0x5a00
  0x3         2  0x5a20  0x80
  0x4         2  0x5aa0  0x80
  0xa         2  0x5b20  0x800
  0xb         2  0x6320  0x800
  0xc         2  0x6b20  0x800
  0xd         2  0x7320  0x800
  0xe         0  0x7b20  0x40
  0xf         0  0x7b60  0x40
  0x10        1  0x7ba0  0x20
  0x11        1  0x7bc0  0x20
  0x6         2  0x7be0  0x20
  0x7         2  0x7c00  0x20
  0xfffffffe  0  0x7c20  0x13b6e0

ftl_superblock_v5_md_layout_dump: SB metadata layout - base dev (type / ver / blk_offs / blk_sz):
  0x1         5  0x0        0x20
  0xfffffffe  0  0x20       0x20
  0x9         0  0x40       0x1900000
  0x5         0  0x1900040  0x360
  0xfffffffe  0  0x19003a0  0x3fc60
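For orientation: a layout dump like the one above is printed when an FTL bdev is brought up over a base bdev and an NV-cache bdev. A minimal sketch of that step, assuming the stock rpc.py bdev_ftl_create RPC; the bdev names here are illustrative, not the ones trim.sh derives from its PCIe arguments:

    # Sketch only: create an FTL bdev (ftl0) over a base bdev and an
    # NV-cache bdev; ftl0/nvme0n1p0/nvc0n1p0 are illustrative names.
    rpc.py bdev_ftl_create -b ftl0 -d nvme0n1p0 -c nvc0n1p0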
trace_step [FTL][ftl0] FTL startup steps (all status 0):
  Layout upgrade                  0.723 ms
  Initialize metadata            38.132 ms
  Initialize band addresses       0.071 ms
  Initialize NV cache            49.189 ms
  Initialize valid map            0.005 ms
  Initialize trim map             0.678 ms
  Initialize bands metadata       0.137 ms
  Initialize reloc               18.863 ms
  Restore NV cache metadata      15.405 ms
  Restore valid map metadata     26.467 ms
  Restore band info metadata     13.145 ms
  Restore trim metadata          12.680 ms
  Initialize P2L checkpointing    0.577 ms
  Restore P2L checkpoints        73.182 ms
  Initialize L2P                 37.338 ms
  Restore L2P                     0.020 ms
  Finalize band initialization    0.047 ms
  Start core poller               0.013 ms
  Self test on startup            0.015 ms (self test skipped)
  Set FTL dirty state            26.749 ms
  Finalize initialization         0.052 ms

[2024-11-29 07:52:29.270182] ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2; state loaded successfully
[2024-11-29 07:52:29.410207] ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
[2024-11-29 07:52:29.463324] io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
[2024-11-29 07:52:29.466781] finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 361.501 ms, result 0
[2024-11-29 07:52:29.468080] io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
[2024-11-29 07:52:29.481755] io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread

Copying: 256/256 [MB] (average 12 MBps), 07:52:31Z through 07:52:49Z -- the twenty intermediate per-interval progress records (10-22 MBps) are elided here.

[2024-11-29 07:52:49.684228] io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
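The "Copying: N/256 [MB]" records above are spdk_dd progress output from the copy step of trim.sh. A hedged sketch of a comparable invocation; the file path, block size, count, and JSON config are illustrative assumptions, not the exact command line from the script:

    # Sketch only: copy 256 MiB of the prepared random pattern onto the
    # FTL bdev through spdk_dd, which prints the progress records above.
    spdk_dd --json /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json \
            --if /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern \
            --ob ftl0 --bs 1048576 --count 256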
trace_step [FTL][ftl0] FTL shutdown steps (all status 0):
  Deinit core IO channel          0.006 ms
  Unregister IO device            3.336 ms
  Stop core poller                0.273 ms
  Persist L2P                     3.712 ms
  Finish L2P trims                7.624 ms
  Persist NV cache metadata      27.405 ms
  Persist valid map metadata     18.620 ms
  Persist P2L metadata            0.119 ms
  Persist band info metadata     25.862 ms
  Persist trim metadata          25.755 ms
  Persist superblock             25.097 ms
  Set FTL clean state            25.020 ms

[2024-11-29 07:52:49.696702] io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread

[2024-11-29 07:52:49.860960] ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
  Bands 1-100 (all 100 identical): 0 / 261120, wr_cnt: 0, state: free

[2024-11-29 07:52:49.861845] ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
  device UUID:      3615a4e7-f157-4413-91c6-683bd04f0ab0
  total valid LBAs: 0
  total writes:     960
  user writes:      0
  WAF:              inf
  limits:           crit 0, high 0, low 0, start 0

trace_step [FTL][ftl0] (all status 0):
  Dump statistics                 1.003 ms
  Deinitialize L2P               14.944 ms
  Deinitialize P2L checkpointing  0.411 ms
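The 'FTL shutdown' sequence above (persist L2P and metadata, set clean state, dump band/stat state) is what runs when the FTL bdev is torn down at the end of the test. A minimal sketch, assuming the stock rpc.py bdev_ftl_delete RPC; the bdev name is illustrative:

    # Sketch only: deleting the FTL bdev triggers the 'FTL shutdown'
    # management process traced above.
    rpc.py bdev_ftl_delete -b ftl0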
trace_step [FTL][ftl0] Rollback steps (each 0.000 ms, status 0): Initialize reloc, Initialize bands metadata, Initialize trim map, Initialize valid map, Initialize NV cache, Initialize metadata, Initialize core IO channel, Initialize bands, Initialize memory pools, Initialize superblock, Open cache bdev, Open base bdev.

[2024-11-29 07:52:50.089732] finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 393.140 ms, result 0

07:52:50 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
/home/vagrant/spdk_repo/spdk/test/ftl/data: OK
07:52:51 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT
07:52:51 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill
07:52:51 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
07:52:51 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
07:52:51 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern
07:52:51 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data
07:52:51 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 76925
Process with pid 76925 is not found ('kill -0 76925' in autotest_common.sh line 958 reported: No such process; treated as already exited)

************************************
END TEST ftl_trim
************************************

real    1m24.281s
user    1m51.474s
sys     0m6.054s
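The "data: OK" line above is the integrity gate of the trim test: a checksum recorded while the data was first written must match what is read back after the FTL device has gone through its shutdown/startup cycle. A sketch of the pattern, using the paths from the cleanup lines above:

    # Sketch of the verify pattern: record a checksum of the test data up
    # front, then re-check it after the FTL shutdown/restore cycle.
    md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data > \
        /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
    # ... FTL shutdown, restart, data read back ...
    md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5   # prints "data: OK"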
07:52:51 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0

************************************
START TEST ftl_restore
************************************

* Looking for test storage...
* Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl

07:52:51 ftl.ftl_restore -- lcov gate (autotest_common.sh@1692-1707 via scripts/common.sh cmp_versions): the installed lcov version is extracted with "lcov --version | awk '{print $NF}'" and checked with 'lt 1.15 2'; the fields are split on '.', '-', ':' and compared numerically, 1 < 2, so the check succeeds and LCOV_OPTS/LCOV are exported with --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 plus the genhtml/geninfo flags (--rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1).

07:52:51 ftl.ftl_restore -- restore.sh@9 sources test/ftl/common.sh, which resolves testdir=/home/vagrant/spdk_repo/spdk/test/ftl and rootdir=/home/vagrant/spdk_repo/spdk and exports:
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  ftl_tgt_core_mask='[0]'
  spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt  (spdk_tgt_cpumask='[0]', spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json, spdk_tgt_pid=)
  spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt  (spdk_ini_cpumask='[1]', spdk_ini_rpc=/var/tmp/spdk.tgt.sock, spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json, spdk_ini_pid=)
  spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

07:52:51 ftl.ftl_restore -- restore.sh@13-25: mount_dir=$(mktemp -d) -> /tmp/tmp.MgWcEHchFh; 'getopts :u:c:f' consumes -c and sets nv_cache=0000:00:10.0; after 'shift 2', device=0000:00:11.0 and timeout=240.
07:52:51 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT
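The version gate traced above reduces to a small pair of helpers. A reconstruction from the xtrace; the in-tree scripts/common.sh handles more operators and edge cases than the trace exercises, and the non-numeric fallback in decimal() is an assumption:

    # Reconstructed from the xtrace: split versions on '.', '-' and ':',
    # then compare numerically field by field. 'lt 1.15 2' succeeds (1 < 2).
    decimal() {
        local d=$1
        [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0   # fallback assumed
    }
    cmp_versions() {
        local ver1 ver1_l ver2 ver2_l op=$2 lt=0 gt=0 v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            ver1[v]=$(decimal "${ver1[v]:-0}")
            ver2[v]=$(decimal "${ver2[v]:-0}")
            (( ver1[v] > ver2[v] )) && { gt=1; break; }
            (( ver1[v] < ver2[v] )) && { lt=1; break; }
        done
        case "$op" in
            '<') (( lt == 1 )) ;;
            '>') (( gt == 1 )) ;;
            *)   false ;;   # remaining operators not reconstructed here
        esac
    }
    lt() { cmp_versions "$1" '<' "$2"; }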
07:52:51 ftl.ftl_restore -- ftl/restore.sh@38-39 -- # launched /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt; svcpid=77289
07:52:51 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 77289
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...

[2024-11-29 07:52:51.825662] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization...
[2024-11-29 07:52:51.825782] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77289 ]
[2024-11-29 07:52:51.983725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-29 07:52:52.102811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0

07:52:52 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424
07:52:52 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
07:52:53 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1

07:52:53 ftl.ftl_restore -- get_bdev_size nvme0n1 (bdev_get_bdevs -b nvme0n1); key fields of the JSON dump:
  name "nvme0n1", product_name "NVMe disk", block_size 4096, num_blocks 1310720
  uuid "859df27f-b9c4-455a-91ae-eea3640a00d4", claimed: true (claim_type "read_many_write_one")
  driver_specific.nvme: PCIe at 0000:00:11.0, model "QEMU NVMe Ctrl", serial "12341", firmware 8.0.0, subnqn nqn.2019-08.org.qemu:12341, NVMe 1.4, ns 1

07:52:53 ftl.ftl_restore -- bs=4096, nb=1310720 -> bdev_size=5120 MiB; base_size=5120, and '[[ 103424 -le 5120 ]]' is false, so the requested 103424 MiB is carved out of nvme0n1 as a thin-provisioned logical volume instead of using the raw namespace.
bdev_lvol_create_lvstore nvme0n1 lvs 00:21:04.221 07:52:53 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=aec35b25-6979-4ca1-a6c1-137abafd51c7 00:21:04.221 07:52:53 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u aec35b25-6979-4ca1-a6c1-137abafd51c7 00:21:04.221 07:52:54 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=817db7c8-5c55-40aa-9b81-b2aef0843776 00:21:04.221 07:52:54 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:21:04.221 07:52:54 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 817db7c8-5c55-40aa-9b81-b2aef0843776 00:21:04.221 07:52:54 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:21:04.221 07:52:54 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:04.221 07:52:54 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=817db7c8-5c55-40aa-9b81-b2aef0843776 00:21:04.221 07:52:54 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:21:04.221 07:52:54 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 817db7c8-5c55-40aa-9b81-b2aef0843776 00:21:04.221 07:52:54 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=817db7c8-5c55-40aa-9b81-b2aef0843776 00:21:04.221 07:52:54 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:04.221 07:52:54 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:04.221 07:52:54 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:04.221 07:52:54 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 817db7c8-5c55-40aa-9b81-b2aef0843776 00:21:04.482 07:52:54 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:04.482 { 00:21:04.482 "name": "817db7c8-5c55-40aa-9b81-b2aef0843776", 00:21:04.482 "aliases": [ 00:21:04.482 "lvs/nvme0n1p0" 00:21:04.482 ], 00:21:04.482 "product_name": "Logical Volume", 00:21:04.482 "block_size": 4096, 00:21:04.482 "num_blocks": 26476544, 00:21:04.482 "uuid": "817db7c8-5c55-40aa-9b81-b2aef0843776", 00:21:04.482 "assigned_rate_limits": { 00:21:04.482 "rw_ios_per_sec": 0, 00:21:04.482 "rw_mbytes_per_sec": 0, 00:21:04.482 "r_mbytes_per_sec": 0, 00:21:04.482 "w_mbytes_per_sec": 0 00:21:04.482 }, 00:21:04.482 "claimed": false, 00:21:04.482 "zoned": false, 00:21:04.482 "supported_io_types": { 00:21:04.482 "read": true, 00:21:04.482 "write": true, 00:21:04.482 "unmap": true, 00:21:04.482 "flush": false, 00:21:04.482 "reset": true, 00:21:04.482 "nvme_admin": false, 00:21:04.482 "nvme_io": false, 00:21:04.482 "nvme_io_md": false, 00:21:04.482 "write_zeroes": true, 00:21:04.482 "zcopy": false, 00:21:04.482 "get_zone_info": false, 00:21:04.482 "zone_management": false, 00:21:04.482 "zone_append": false, 00:21:04.482 "compare": false, 00:21:04.482 "compare_and_write": false, 00:21:04.482 "abort": false, 00:21:04.482 "seek_hole": true, 00:21:04.482 "seek_data": true, 00:21:04.482 "copy": false, 00:21:04.482 "nvme_iov_md": false 00:21:04.482 }, 00:21:04.482 "driver_specific": { 00:21:04.482 "lvol": { 00:21:04.482 "lvol_store_uuid": "aec35b25-6979-4ca1-a6c1-137abafd51c7", 00:21:04.482 "base_bdev": "nvme0n1", 00:21:04.482 "thin_provision": true, 00:21:04.482 "num_allocated_clusters": 0, 00:21:04.482 "snapshot": false, 00:21:04.482 "clone": false, 00:21:04.482 "esnap_clone": false 00:21:04.482 } 00:21:04.482 } 00:21:04.482 } 00:21:04.482 ]' 00:21:04.482 07:52:54 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:04.482 07:52:54 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:04.482 07:52:54 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:04.482 07:52:54 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:04.482 07:52:54 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:04.482 07:52:54 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:21:04.482 07:52:54 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:21:04.482 07:52:54 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:21:04.482 07:52:54 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:04.744 07:52:54 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:04.744 07:52:54 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:04.744 07:52:54 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 817db7c8-5c55-40aa-9b81-b2aef0843776 00:21:04.744 07:52:54 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=817db7c8-5c55-40aa-9b81-b2aef0843776 00:21:04.744 07:52:54 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:04.744 07:52:54 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:04.744 07:52:54 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:04.744 07:52:54 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 817db7c8-5c55-40aa-9b81-b2aef0843776 00:21:05.003 07:52:54 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:05.003 { 00:21:05.003 "name": "817db7c8-5c55-40aa-9b81-b2aef0843776", 00:21:05.003 "aliases": [ 00:21:05.003 "lvs/nvme0n1p0" 00:21:05.003 ], 00:21:05.003 "product_name": "Logical Volume", 00:21:05.003 "block_size": 4096, 00:21:05.003 "num_blocks": 26476544, 00:21:05.003 "uuid": "817db7c8-5c55-40aa-9b81-b2aef0843776", 00:21:05.003 "assigned_rate_limits": { 00:21:05.003 "rw_ios_per_sec": 0, 00:21:05.003 "rw_mbytes_per_sec": 0, 00:21:05.003 "r_mbytes_per_sec": 0, 00:21:05.003 "w_mbytes_per_sec": 0 00:21:05.003 }, 00:21:05.003 "claimed": false, 00:21:05.003 "zoned": false, 00:21:05.003 "supported_io_types": { 00:21:05.003 "read": true, 00:21:05.003 "write": true, 00:21:05.003 "unmap": true, 00:21:05.003 "flush": false, 00:21:05.003 "reset": true, 00:21:05.003 "nvme_admin": false, 00:21:05.003 "nvme_io": false, 00:21:05.003 "nvme_io_md": false, 00:21:05.003 "write_zeroes": true, 00:21:05.003 "zcopy": false, 00:21:05.003 "get_zone_info": false, 00:21:05.003 "zone_management": false, 00:21:05.003 "zone_append": false, 00:21:05.003 "compare": false, 00:21:05.003 "compare_and_write": false, 00:21:05.003 "abort": false, 00:21:05.003 "seek_hole": true, 00:21:05.003 "seek_data": true, 00:21:05.003 "copy": false, 00:21:05.003 "nvme_iov_md": false 00:21:05.003 }, 00:21:05.003 "driver_specific": { 00:21:05.003 "lvol": { 00:21:05.003 "lvol_store_uuid": "aec35b25-6979-4ca1-a6c1-137abafd51c7", 00:21:05.003 "base_bdev": "nvme0n1", 00:21:05.003 "thin_provision": true, 00:21:05.003 "num_allocated_clusters": 0, 00:21:05.003 "snapshot": false, 00:21:05.003 "clone": false, 00:21:05.003 "esnap_clone": false 00:21:05.003 } 00:21:05.003 } 00:21:05.003 } 00:21:05.003 ]' 00:21:05.003 07:52:54 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
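Both jq probes feed get_bdev_size, which turns a bdev's block count and block size into MiB. A sketch of the helper as the trace exercises it (rpc.py stands in for the full scripts/rpc.py path):

    bdev_info=$(rpc.py bdev_get_bdevs -b 817db7c8-5c55-40aa-9b81-b2aef0843776)
    bs=$(jq '.[] .block_size' <<< "$bdev_info")    # 4096
    nb=$(jq '.[] .num_blocks' <<< "$bdev_info")    # 26476544
    echo $(( nb * bs / 1024 / 1024 ))              # 26476544 * 4096 B = 103424 MiB

The same math gave 5120 MiB for the raw namespace earlier (1310720 blocks * 4096 B), and the lvol's 103424 MiB matches the size requested from create_base_bdev at the start of the test.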
00:21:05.003 07:52:54 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:05.003 07:52:54 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:05.262 07:52:54 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:05.262 07:52:54 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:05.262 07:52:54 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:21:05.262 07:52:54 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:21:05.262 07:52:54 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:05.262 07:52:55 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:21:05.262 07:52:55 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 817db7c8-5c55-40aa-9b81-b2aef0843776 00:21:05.262 07:52:55 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=817db7c8-5c55-40aa-9b81-b2aef0843776 00:21:05.262 07:52:55 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:05.262 07:52:55 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:05.262 07:52:55 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:05.262 07:52:55 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 817db7c8-5c55-40aa-9b81-b2aef0843776 00:21:05.520 07:52:55 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:05.520 { 00:21:05.520 "name": "817db7c8-5c55-40aa-9b81-b2aef0843776", 00:21:05.520 "aliases": [ 00:21:05.520 "lvs/nvme0n1p0" 00:21:05.520 ], 00:21:05.520 "product_name": "Logical Volume", 00:21:05.520 "block_size": 4096, 00:21:05.520 "num_blocks": 26476544, 00:21:05.520 "uuid": "817db7c8-5c55-40aa-9b81-b2aef0843776", 00:21:05.520 "assigned_rate_limits": { 00:21:05.520 "rw_ios_per_sec": 0, 00:21:05.520 "rw_mbytes_per_sec": 0, 00:21:05.520 "r_mbytes_per_sec": 0, 00:21:05.520 "w_mbytes_per_sec": 0 00:21:05.520 }, 00:21:05.520 "claimed": false, 00:21:05.520 "zoned": false, 00:21:05.520 "supported_io_types": { 00:21:05.520 "read": true, 00:21:05.520 "write": true, 00:21:05.520 "unmap": true, 00:21:05.520 "flush": false, 00:21:05.520 "reset": true, 00:21:05.520 "nvme_admin": false, 00:21:05.520 "nvme_io": false, 00:21:05.520 "nvme_io_md": false, 00:21:05.520 "write_zeroes": true, 00:21:05.520 "zcopy": false, 00:21:05.520 "get_zone_info": false, 00:21:05.520 "zone_management": false, 00:21:05.520 "zone_append": false, 00:21:05.520 "compare": false, 00:21:05.520 "compare_and_write": false, 00:21:05.520 "abort": false, 00:21:05.520 "seek_hole": true, 00:21:05.520 "seek_data": true, 00:21:05.520 "copy": false, 00:21:05.520 "nvme_iov_md": false 00:21:05.520 }, 00:21:05.520 "driver_specific": { 00:21:05.520 "lvol": { 00:21:05.520 "lvol_store_uuid": "aec35b25-6979-4ca1-a6c1-137abafd51c7", 00:21:05.520 "base_bdev": "nvme0n1", 00:21:05.520 "thin_provision": true, 00:21:05.520 "num_allocated_clusters": 0, 00:21:05.520 "snapshot": false, 00:21:05.520 "clone": false, 00:21:05.520 "esnap_clone": false 00:21:05.520 } 00:21:05.520 } 00:21:05.520 } 00:21:05.520 ]' 00:21:05.520 07:52:55 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:05.520 07:52:55 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:05.520 07:52:55 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:05.520 07:52:55 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:21:05.520 07:52:55 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:05.520 07:52:55 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:21:05.520 07:52:55 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:21:05.520 07:52:55 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 817db7c8-5c55-40aa-9b81-b2aef0843776 --l2p_dram_limit 10' 00:21:05.520 07:52:55 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:21:05.520 07:52:55 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:21:05.520 07:52:55 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:21:05.520 07:52:55 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:21:05.520 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:21:05.520 07:52:55 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 817db7c8-5c55-40aa-9b81-b2aef0843776 --l2p_dram_limit 10 -c nvc0n1p0 00:21:05.810 [2024-11-29 07:52:55.600002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.810 [2024-11-29 07:52:55.600042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:05.810 [2024-11-29 07:52:55.600054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:05.810 [2024-11-29 07:52:55.600061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.810 [2024-11-29 07:52:55.600112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.810 [2024-11-29 07:52:55.600120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:05.810 [2024-11-29 07:52:55.600128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:21:05.810 [2024-11-29 07:52:55.600134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.810 [2024-11-29 07:52:55.600156] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:05.810 [2024-11-29 07:52:55.600758] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:05.810 [2024-11-29 07:52:55.600775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.810 [2024-11-29 07:52:55.600782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:05.810 [2024-11-29 07:52:55.600790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.622 ms 00:21:05.810 [2024-11-29 07:52:55.600796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.810 [2024-11-29 07:52:55.600823] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 3f1401f1-4ff6-49ff-b356-dcb7b33876dc 00:21:05.810 [2024-11-29 07:52:55.601800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.810 [2024-11-29 07:52:55.601822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:05.810 [2024-11-29 07:52:55.601831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:21:05.810 [2024-11-29 07:52:55.601840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.810 [2024-11-29 07:52:55.606923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.810 [2024-11-29 
07:52:55.606963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:05.810 [2024-11-29 07:52:55.606973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.750 ms 00:21:05.810 [2024-11-29 07:52:55.606980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.810 [2024-11-29 07:52:55.607049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.810 [2024-11-29 07:52:55.607059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:05.810 [2024-11-29 07:52:55.607065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:21:05.810 [2024-11-29 07:52:55.607075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.810 [2024-11-29 07:52:55.607120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.810 [2024-11-29 07:52:55.607129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:05.811 [2024-11-29 07:52:55.607137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:05.811 [2024-11-29 07:52:55.607144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.811 [2024-11-29 07:52:55.607162] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:05.811 [2024-11-29 07:52:55.610063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.811 [2024-11-29 07:52:55.610085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:05.811 [2024-11-29 07:52:55.610095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.906 ms 00:21:05.811 [2024-11-29 07:52:55.610101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.811 [2024-11-29 07:52:55.610127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.811 [2024-11-29 07:52:55.610133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:05.811 [2024-11-29 07:52:55.610141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:05.811 [2024-11-29 07:52:55.610146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.811 [2024-11-29 07:52:55.610168] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:05.811 [2024-11-29 07:52:55.610275] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:05.811 [2024-11-29 07:52:55.610291] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:05.811 [2024-11-29 07:52:55.610299] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:05.811 [2024-11-29 07:52:55.610308] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:05.811 [2024-11-29 07:52:55.610315] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:05.811 [2024-11-29 07:52:55.610322] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:05.811 [2024-11-29 07:52:55.610330] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:05.811 [2024-11-29 07:52:55.610336] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:05.811 [2024-11-29 07:52:55.610342] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:05.811 [2024-11-29 07:52:55.610350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.811 [2024-11-29 07:52:55.610361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:05.811 [2024-11-29 07:52:55.610368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.184 ms 00:21:05.811 [2024-11-29 07:52:55.610373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.811 [2024-11-29 07:52:55.610439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.811 [2024-11-29 07:52:55.610454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:05.811 [2024-11-29 07:52:55.610462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:21:05.811 [2024-11-29 07:52:55.610467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.811 [2024-11-29 07:52:55.610547] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:05.811 [2024-11-29 07:52:55.610554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:05.811 [2024-11-29 07:52:55.610561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:05.811 [2024-11-29 07:52:55.610567] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:05.811 [2024-11-29 07:52:55.610574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:05.811 [2024-11-29 07:52:55.610579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:05.811 [2024-11-29 07:52:55.610586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:05.811 [2024-11-29 07:52:55.610591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:05.811 [2024-11-29 07:52:55.610598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:05.811 [2024-11-29 07:52:55.610602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:05.811 [2024-11-29 07:52:55.610609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:05.811 [2024-11-29 07:52:55.610613] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:05.811 [2024-11-29 07:52:55.610621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:05.811 [2024-11-29 07:52:55.610626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:05.811 [2024-11-29 07:52:55.610633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:05.811 [2024-11-29 07:52:55.610638] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:05.811 [2024-11-29 07:52:55.610645] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:05.811 [2024-11-29 07:52:55.610650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:05.811 [2024-11-29 07:52:55.610658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:05.811 [2024-11-29 07:52:55.610663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:05.811 [2024-11-29 07:52:55.610669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:05.811 [2024-11-29 07:52:55.610674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:05.811 [2024-11-29 07:52:55.610680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:05.811 
[2024-11-29 07:52:55.610686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:05.811 [2024-11-29 07:52:55.610692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:05.811 [2024-11-29 07:52:55.610697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:05.811 [2024-11-29 07:52:55.610703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:05.811 [2024-11-29 07:52:55.610707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:05.811 [2024-11-29 07:52:55.610714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:05.811 [2024-11-29 07:52:55.610719] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:05.811 [2024-11-29 07:52:55.610725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:05.811 [2024-11-29 07:52:55.610729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:05.811 [2024-11-29 07:52:55.610737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:05.811 [2024-11-29 07:52:55.610742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:05.811 [2024-11-29 07:52:55.610748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:05.811 [2024-11-29 07:52:55.610753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:05.811 [2024-11-29 07:52:55.610759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:05.811 [2024-11-29 07:52:55.610764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:05.811 [2024-11-29 07:52:55.610770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:05.811 [2024-11-29 07:52:55.610775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:05.811 [2024-11-29 07:52:55.610781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:05.811 [2024-11-29 07:52:55.610786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:05.811 [2024-11-29 07:52:55.610792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:05.811 [2024-11-29 07:52:55.610797] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:05.811 [2024-11-29 07:52:55.610805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:05.811 [2024-11-29 07:52:55.610810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:05.811 [2024-11-29 07:52:55.610818] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:05.811 [2024-11-29 07:52:55.610824] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:05.811 [2024-11-29 07:52:55.610831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:05.811 [2024-11-29 07:52:55.610837] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:05.811 [2024-11-29 07:52:55.610843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:05.811 [2024-11-29 07:52:55.610848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:05.811 [2024-11-29 07:52:55.610854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:05.811 [2024-11-29 07:52:55.610861] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:05.811 [2024-11-29 
07:52:55.610871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:05.811 [2024-11-29 07:52:55.610878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:05.811 [2024-11-29 07:52:55.610885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:05.811 [2024-11-29 07:52:55.610890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:05.811 [2024-11-29 07:52:55.610897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:05.811 [2024-11-29 07:52:55.610902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:05.811 [2024-11-29 07:52:55.610908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:05.811 [2024-11-29 07:52:55.610914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:05.811 [2024-11-29 07:52:55.610920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:05.811 [2024-11-29 07:52:55.610926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:05.811 [2024-11-29 07:52:55.610934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:05.811 [2024-11-29 07:52:55.610939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:05.811 [2024-11-29 07:52:55.610946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:05.811 [2024-11-29 07:52:55.610951] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:05.811 [2024-11-29 07:52:55.610959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:05.811 [2024-11-29 07:52:55.610965] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:05.812 [2024-11-29 07:52:55.610972] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:05.812 [2024-11-29 07:52:55.610978] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:05.812 [2024-11-29 07:52:55.610986] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:05.812 [2024-11-29 07:52:55.610992] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:05.812 [2024-11-29 07:52:55.610999] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:05.812 [2024-11-29 07:52:55.611004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.812 [2024-11-29 07:52:55.611011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:05.812 [2024-11-29 07:52:55.611017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.511 ms 00:21:05.812 [2024-11-29 07:52:55.611023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.812 [2024-11-29 07:52:55.611063] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:21:05.812 [2024-11-29 07:52:55.611078] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:09.148 [2024-11-29 07:52:59.070684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.148 [2024-11-29 07:52:59.070792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:09.148 [2024-11-29 07:52:59.070812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3459.603 ms 00:21:09.148 [2024-11-29 07:52:59.070825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.410 [2024-11-29 07:52:59.108306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.410 [2024-11-29 07:52:59.108378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:09.410 [2024-11-29 07:52:59.108394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.214 ms 00:21:09.410 [2024-11-29 07:52:59.108407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.410 [2024-11-29 07:52:59.108571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.410 [2024-11-29 07:52:59.108588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:09.410 [2024-11-29 07:52:59.108599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:21:09.410 [2024-11-29 07:52:59.108619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.410 [2024-11-29 07:52:59.148422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.410 [2024-11-29 07:52:59.148486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:09.410 [2024-11-29 07:52:59.148500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.749 ms 00:21:09.410 [2024-11-29 07:52:59.148512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.410 [2024-11-29 07:52:59.148555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.410 [2024-11-29 07:52:59.148567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:09.410 [2024-11-29 07:52:59.148577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:09.410 [2024-11-29 07:52:59.148599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.410 [2024-11-29 07:52:59.149338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.410 [2024-11-29 07:52:59.149384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:09.410 [2024-11-29 07:52:59.149397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.683 ms 00:21:09.410 [2024-11-29 07:52:59.149409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.410 
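Two notes on the construction step that produced the startup trace above. The "[: : integer expression expected" message just before the bdev_ftl_create call is bash objecting to '[' '' -eq 1 ']': restore.sh line 54 compares an option flag numerically, but the flag was never set on this invocation, so the comparison sees an empty string. The run is unaffected (the test simply evaluates false and execution continues to the RPC at line 58); expanding the variable with a default, e.g. [ "${flag:-0}" -eq 1 ] with flag as a stand-in name, would avoid the noise. The construction call itself, reassembled from ftl_construct_args:

    # -b names the FTL bdev, -d is the base bdev (the thin lvol),
    # -c is the write-buffer cache (the nvc0n1 split created above),
    # and --l2p_dram_limit caps the L2P table's resident DRAM in MiB:
    rpc.py -t 240 bdev_ftl_create -b ftl0 \
        -d 817db7c8-5c55-40aa-9b81-b2aef0843776 \
        --l2p_dram_limit 10 -c nvc0n1p0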
[2024-11-29 07:52:59.149548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.410 [2024-11-29 07:52:59.149566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:09.410 [2024-11-29 07:52:59.149576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:21:09.410 [2024-11-29 07:52:59.149589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.410 [2024-11-29 07:52:59.170122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.410 [2024-11-29 07:52:59.170186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:09.410 [2024-11-29 07:52:59.170198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.512 ms 00:21:09.410 [2024-11-29 07:52:59.170209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.410 [2024-11-29 07:52:59.196921] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:09.410 [2024-11-29 07:52:59.202369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.410 [2024-11-29 07:52:59.202410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:09.410 [2024-11-29 07:52:59.202427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.065 ms 00:21:09.410 [2024-11-29 07:52:59.202436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.410 [2024-11-29 07:52:59.302785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.410 [2024-11-29 07:52:59.302831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:09.410 [2024-11-29 07:52:59.302850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 100.284 ms 00:21:09.410 [2024-11-29 07:52:59.302860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.410 [2024-11-29 07:52:59.303087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.410 [2024-11-29 07:52:59.303101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:09.410 [2024-11-29 07:52:59.303117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.172 ms 00:21:09.410 [2024-11-29 07:52:59.303126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.411 [2024-11-29 07:52:59.329620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.411 [2024-11-29 07:52:59.329664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:09.411 [2024-11-29 07:52:59.329681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.435 ms 00:21:09.411 [2024-11-29 07:52:59.329691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.673 [2024-11-29 07:52:59.355155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.673 [2024-11-29 07:52:59.355197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:09.673 [2024-11-29 07:52:59.355213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.403 ms 00:21:09.673 [2024-11-29 07:52:59.355221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.673 [2024-11-29 07:52:59.355878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.673 [2024-11-29 07:52:59.355903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:09.673 
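The "9 (of 10) MiB" notice from ftl_l2p_cache ties back to the layout numbers printed during startup. For reference:

    L2P entries       20971520                (one 4 B entry per 4 KiB logical block)
    full table        20971520 * 4 B = 83886080 B = 80.00 MiB   ("Region l2p: 80.00 MiB")
    DRAM budget       --l2p_dram_limit 10     (10 MiB)
    resident cache    9 MiB of the 10 MiB budget holds mappings

That the cache keeps 9 of the 10 MiB resident suggests the remaining megabyte goes to its own bookkeeping; the log does not break that down, so treat it as an inference.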
[2024-11-29 07:52:59.355920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.605 ms 00:21:09.673 [2024-11-29 07:52:59.355929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.673 [2024-11-29 07:52:59.453270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.673 [2024-11-29 07:52:59.453324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:09.673 [2024-11-29 07:52:59.453346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 97.290 ms 00:21:09.673 [2024-11-29 07:52:59.453357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.673 [2024-11-29 07:52:59.482026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.673 [2024-11-29 07:52:59.482072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:09.673 [2024-11-29 07:52:59.482088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.550 ms 00:21:09.673 [2024-11-29 07:52:59.482098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.673 [2024-11-29 07:52:59.507839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.673 [2024-11-29 07:52:59.507881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:09.673 [2024-11-29 07:52:59.507896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.684 ms 00:21:09.673 [2024-11-29 07:52:59.507906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.673 [2024-11-29 07:52:59.533998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.673 [2024-11-29 07:52:59.534039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:09.673 [2024-11-29 07:52:59.534055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.037 ms 00:21:09.673 [2024-11-29 07:52:59.534063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.673 [2024-11-29 07:52:59.534120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.673 [2024-11-29 07:52:59.534131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:09.673 [2024-11-29 07:52:59.534148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:09.673 [2024-11-29 07:52:59.534156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.673 [2024-11-29 07:52:59.534256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.673 [2024-11-29 07:52:59.534273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:09.673 [2024-11-29 07:52:59.534284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:21:09.673 [2024-11-29 07:52:59.534292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.673 [2024-11-29 07:52:59.536528] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3935.901 ms, result 0 00:21:09.673 { 00:21:09.673 "name": "ftl0", 00:21:09.673 "uuid": "3f1401f1-4ff6-49ff-b356-dcb7b33876dc" 00:21:09.673 } 00:21:09.673 07:52:59 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:21:09.673 07:52:59 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:09.935 07:52:59 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:21:09.935 07:52:59 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:21:10.199 [2024-11-29 07:52:59.974843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.199 [2024-11-29 07:52:59.974903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:10.199 [2024-11-29 07:52:59.974917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:10.199 [2024-11-29 07:52:59.974929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.199 [2024-11-29 07:52:59.974956] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:10.200 [2024-11-29 07:52:59.978355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.200 [2024-11-29 07:52:59.978391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:10.200 [2024-11-29 07:52:59.978405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.376 ms 00:21:10.200 [2024-11-29 07:52:59.978415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.200 [2024-11-29 07:52:59.978738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.200 [2024-11-29 07:52:59.978753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:10.200 [2024-11-29 07:52:59.978766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:21:10.200 [2024-11-29 07:52:59.978775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.200 [2024-11-29 07:52:59.982052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.200 [2024-11-29 07:52:59.982076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:10.200 [2024-11-29 07:52:59.982090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.258 ms 00:21:10.200 [2024-11-29 07:52:59.982099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.200 [2024-11-29 07:52:59.988329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.200 [2024-11-29 07:52:59.988366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:10.200 [2024-11-29 07:52:59.988380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.206 ms 00:21:10.200 [2024-11-29 07:52:59.988389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.200 [2024-11-29 07:53:00.013659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.200 [2024-11-29 07:53:00.013702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:10.200 [2024-11-29 07:53:00.013718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.207 ms 00:21:10.200 [2024-11-29 07:53:00.013726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.200 [2024-11-29 07:53:00.032740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.200 [2024-11-29 07:53:00.032789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:10.200 [2024-11-29 07:53:00.032807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.954 ms 00:21:10.200 [2024-11-29 07:53:00.032817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.200 [2024-11-29 07:53:00.033023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.200 [2024-11-29 07:53:00.033038] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:10.200 [2024-11-29 07:53:00.033051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:21:10.200 [2024-11-29 07:53:00.033060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.200 [2024-11-29 07:53:00.058939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.200 [2024-11-29 07:53:00.058987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:10.200 [2024-11-29 07:53:00.059004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.852 ms 00:21:10.200 [2024-11-29 07:53:00.059013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.200 [2024-11-29 07:53:00.085119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.200 [2024-11-29 07:53:00.085167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:10.200 [2024-11-29 07:53:00.085183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.050 ms 00:21:10.200 [2024-11-29 07:53:00.085191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.200 [2024-11-29 07:53:00.109636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.200 [2024-11-29 07:53:00.109696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:10.200 [2024-11-29 07:53:00.109711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.382 ms 00:21:10.200 [2024-11-29 07:53:00.109720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.200 [2024-11-29 07:53:00.134782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.200 [2024-11-29 07:53:00.134823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:10.200 [2024-11-29 07:53:00.134839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.956 ms 00:21:10.200 [2024-11-29 07:53:00.134848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.200 [2024-11-29 07:53:00.134901] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:10.200 [2024-11-29 07:53:00.134919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:10.200 [2024-11-29 07:53:00.134939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:10.200 [2024-11-29 07:53:00.134948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:10.200 [2024-11-29 07:53:00.134959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:10.200 [2024-11-29 07:53:00.134968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:10.200 [2024-11-29 07:53:00.134979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:10.200 [2024-11-29 07:53:00.134988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:10.200 [2024-11-29 07:53:00.135003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:10.200 [2024-11-29 07:53:00.135013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:10.200 [2024-11-29 07:53:00.135023] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:10.200 [2024-11-29 07:53:00.135030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:10.200 [2024-11-29 07:53:00.135041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:10.200 [2024-11-29 07:53:00.135050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:10.200 [2024-11-29 07:53:00.135060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:10.200 [2024-11-29 07:53:00.135068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:10.200 [2024-11-29 07:53:00.135078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:10.200 [2024-11-29 07:53:00.135085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:10.200 [2024-11-29 07:53:00.135098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:10.200 [2024-11-29 07:53:00.135105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:10.200 [2024-11-29 07:53:00.135116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:10.200 [2024-11-29 07:53:00.135123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:10.200 [2024-11-29 07:53:00.135133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:10.200 [2024-11-29 07:53:00.135140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:10.200 [2024-11-29 07:53:00.135155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:10.200 [2024-11-29 07:53:00.135162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:10.200 [2024-11-29 07:53:00.135175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:10.200 [2024-11-29 07:53:00.135184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:10.200 [2024-11-29 07:53:00.135195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:10.200 [2024-11-29 07:53:00.135204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:10.200 [2024-11-29 07:53:00.135216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:10.200 [2024-11-29 07:53:00.135227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:10.200 [2024-11-29 07:53:00.135238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:10.200 [2024-11-29 07:53:00.135246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:10.200 [2024-11-29 07:53:00.135256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:10.200 
[2024-11-29 07:53:00.135264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:10.200 [2024-11-29 07:53:00.135273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:10.200 [2024-11-29 07:53:00.135281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:10.200 [2024-11-29 07:53:00.135291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:10.200 [2024-11-29 07:53:00.135299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:10.200 [2024-11-29 07:53:00.135311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:10.200 [2024-11-29 07:53:00.135319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:10.200 [2024-11-29 07:53:00.135328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:10.200 [2024-11-29 07:53:00.135336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:10.200 [2024-11-29 07:53:00.135347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:10.200 [2024-11-29 07:53:00.135355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:10.200 [2024-11-29 07:53:00.135364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:10.200 [2024-11-29 07:53:00.135373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:10.200 [2024-11-29 07:53:00.135382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:10.200 [2024-11-29 07:53:00.135389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:10.200 [2024-11-29 07:53:00.135399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:10.200 [2024-11-29 07:53:00.135406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:10.200 [2024-11-29 07:53:00.135416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:10.200 [2024-11-29 07:53:00.135424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:10.201 [2024-11-29 07:53:00.135434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:10.201 [2024-11-29 07:53:00.135459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:10.201 [2024-11-29 07:53:00.135482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:10.201 [2024-11-29 07:53:00.135490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:10.201 [2024-11-29 07:53:00.135501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:10.201 [2024-11-29 07:53:00.135509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:21:10.201 [2024-11-29 07:53:00.135519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:10.201 [2024-11-29 07:53:00.135527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:10.201 [2024-11-29 07:53:00.135538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:10.201 [2024-11-29 07:53:00.135550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:10.201 [2024-11-29 07:53:00.135560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:10.201 [2024-11-29 07:53:00.135568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:10.201 [2024-11-29 07:53:00.135579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:10.201 [2024-11-29 07:53:00.135588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:10.201 [2024-11-29 07:53:00.135599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:10.201 [2024-11-29 07:53:00.135607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:10.201 [2024-11-29 07:53:00.135618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:10.201 [2024-11-29 07:53:00.135625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:10.201 [2024-11-29 07:53:00.135638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:10.201 [2024-11-29 07:53:00.135646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:10.201 [2024-11-29 07:53:00.135657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:10.201 [2024-11-29 07:53:00.135667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:10.201 [2024-11-29 07:53:00.135676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:10.201 [2024-11-29 07:53:00.135684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:10.201 [2024-11-29 07:53:00.135693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:10.201 [2024-11-29 07:53:00.135700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:10.201 [2024-11-29 07:53:00.135710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:10.201 [2024-11-29 07:53:00.135718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:10.201 [2024-11-29 07:53:00.135728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:10.201 [2024-11-29 07:53:00.135737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:10.201 [2024-11-29 07:53:00.135747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:21:10.201 [2024-11-29 07:53:00.135756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:10.201 [2024-11-29 07:53:00.135765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:10.201 [2024-11-29 07:53:00.135773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:10.201 [2024-11-29 07:53:00.135785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:10.201 [2024-11-29 07:53:00.135792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:10.201 [2024-11-29 07:53:00.135802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:10.201 [2024-11-29 07:53:00.135810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:10.201 [2024-11-29 07:53:00.135822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:10.201 [2024-11-29 07:53:00.135830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:10.201 [2024-11-29 07:53:00.135840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:10.201 [2024-11-29 07:53:00.135850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:10.201 [2024-11-29 07:53:00.135861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:10.201 [2024-11-29 07:53:00.135869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:10.201 [2024-11-29 07:53:00.135880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:10.201 [2024-11-29 07:53:00.135887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:10.201 [2024-11-29 07:53:00.135899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:10.201 [2024-11-29 07:53:00.135918] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:10.201 [2024-11-29 07:53:00.135928] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3f1401f1-4ff6-49ff-b356-dcb7b33876dc 00:21:10.201 [2024-11-29 07:53:00.135937] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:10.201 [2024-11-29 07:53:00.135949] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:10.201 [2024-11-29 07:53:00.135961] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:10.201 [2024-11-29 07:53:00.135972] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:10.201 [2024-11-29 07:53:00.135980] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:10.201 [2024-11-29 07:53:00.135990] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:10.201 [2024-11-29 07:53:00.135998] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:10.201 [2024-11-29 07:53:00.136008] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:10.201 [2024-11-29 07:53:00.136014] ftl_debug.c: 220:ftl_dev_dump_stats: 
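The statistics block above reports total writes: 960 against user writes: 0, which is why WAF is printed as inf: write amplification is the ratio of media writes to host writes, and with no user I/O yet the ratio is undefined and reported as infinity. A minimal C sketch of that arithmetic, using the figures from this dump (the waf() helper is illustrative, not an SPDK API):

/* waf.c - reproduce the WAF figure printed by ftl_dev_dump_stats above.
 * Assumes WAF = total media writes / user writes, which is consistent
 * with the fields in the dump (total writes: 960, user writes: 0 -> inf).
 */
#include <math.h>
#include <stdint.h>
#include <stdio.h>

static double waf(uint64_t total_writes, uint64_t user_writes)
{
    if (user_writes == 0)
        return INFINITY;    /* no user I/O yet, as in this log */
    return (double)total_writes / (double)user_writes;
}

int main(void)
{
    printf("WAF: %g\n", waf(960, 0));      /* prints "WAF: inf" */
    printf("WAF: %g\n", waf(960, 480));    /* hypothetical user load -> 2 */
    return 0;
}

The 960 media writes against zero user writes are presumably the FTL's own metadata traffic from the startup/shutdown sequence itself.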
*NOTICE*: [FTL][ftl0] start: 0 00:21:10.201 [2024-11-29 07:53:00.136024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.201 [2024-11-29 07:53:00.136032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:10.201 [2024-11-29 07:53:00.136043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.127 ms 00:21:10.201 [2024-11-29 07:53:00.136054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.463 [2024-11-29 07:53:00.151549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.463 [2024-11-29 07:53:00.151587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:10.463 [2024-11-29 07:53:00.151601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.425 ms 00:21:10.464 [2024-11-29 07:53:00.151611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.464 [2024-11-29 07:53:00.152055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.464 [2024-11-29 07:53:00.152077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:10.464 [2024-11-29 07:53:00.152094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.398 ms 00:21:10.464 [2024-11-29 07:53:00.152102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.464 [2024-11-29 07:53:00.202150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:10.464 [2024-11-29 07:53:00.202194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:10.464 [2024-11-29 07:53:00.202209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:10.464 [2024-11-29 07:53:00.202219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.464 [2024-11-29 07:53:00.202307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:10.464 [2024-11-29 07:53:00.202317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:10.464 [2024-11-29 07:53:00.202333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:10.464 [2024-11-29 07:53:00.202342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.464 [2024-11-29 07:53:00.202435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:10.464 [2024-11-29 07:53:00.202472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:10.464 [2024-11-29 07:53:00.202485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:10.464 [2024-11-29 07:53:00.202494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.464 [2024-11-29 07:53:00.202521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:10.464 [2024-11-29 07:53:00.202532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:10.464 [2024-11-29 07:53:00.202543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:10.464 [2024-11-29 07:53:00.202556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.464 [2024-11-29 07:53:00.294593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:10.464 [2024-11-29 07:53:00.294650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:10.464 [2024-11-29 07:53:00.294667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:21:10.464 [2024-11-29 07:53:00.294676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.464 [2024-11-29 07:53:00.369097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:10.464 [2024-11-29 07:53:00.369156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:10.464 [2024-11-29 07:53:00.369176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:10.464 [2024-11-29 07:53:00.369185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.464 [2024-11-29 07:53:00.369324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:10.464 [2024-11-29 07:53:00.369336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:10.464 [2024-11-29 07:53:00.369348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:10.464 [2024-11-29 07:53:00.369357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.464 [2024-11-29 07:53:00.369414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:10.464 [2024-11-29 07:53:00.369426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:10.464 [2024-11-29 07:53:00.369439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:10.464 [2024-11-29 07:53:00.369472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.464 [2024-11-29 07:53:00.369589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:10.464 [2024-11-29 07:53:00.369601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:10.464 [2024-11-29 07:53:00.369614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:10.464 [2024-11-29 07:53:00.369623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.464 [2024-11-29 07:53:00.369670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:10.464 [2024-11-29 07:53:00.369684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:10.464 [2024-11-29 07:53:00.369695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:10.464 [2024-11-29 07:53:00.369703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.464 [2024-11-29 07:53:00.369762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:10.464 [2024-11-29 07:53:00.369773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:10.464 [2024-11-29 07:53:00.369784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:10.464 [2024-11-29 07:53:00.369792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.464 [2024-11-29 07:53:00.369856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:10.464 [2024-11-29 07:53:00.369879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:10.464 [2024-11-29 07:53:00.369891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:10.464 [2024-11-29 07:53:00.369899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.464 [2024-11-29 07:53:00.370087] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 395.183 ms, result 0 00:21:10.464 true 00:21:10.464 07:53:00 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 77289 
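The killprocess 77289 call expands in the trace below to: probe the pid ('kill -0'), check the process's comm name via ps (reactor_0 here, refusing to proceed if it were sudo), send the default SIGTERM with a plain kill, then wait for it to exit. A rough C equivalent of the probe/terminate/wait pattern, for illustration only (the 1 ms poll loop stands in for the shell's wait and is an assumption, not the helper's exact behaviour):

/* kill_and_wait.c - rough C equivalent of the killprocess shell helper
 * traced below: probe the pid, send the default SIGTERM, then wait for
 * the pid to disappear.
 */
#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static int kill_and_wait(pid_t pid)
{
    if (kill(pid, 0) != 0)        /* 'kill -0': does the process exist? */
        return -1;
    if (kill(pid, SIGTERM) != 0)  /* plain 'kill' sends SIGTERM */
        return -1;
    while (kill(pid, 0) == 0)     /* poll until the pid is gone (assumed) */
        usleep(1000);
    return errno == ESRCH ? 0 : -1;
}

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 1;
    }
    return kill_and_wait((pid_t)atol(argv[1])) ? 1 : 0;
}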
00:21:10.464 07:53:00 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77289 ']' 00:21:10.464 07:53:00 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77289 00:21:10.464 07:53:00 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:21:10.464 07:53:00 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:10.464 07:53:00 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77289 00:21:10.725 07:53:00 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:10.725 07:53:00 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:10.725 07:53:00 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77289' 00:21:10.725 killing process with pid 77289 00:21:10.725 07:53:00 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 77289 00:21:10.725 07:53:00 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 77289 00:21:17.303 07:53:07 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:21:21.509 262144+0 records in 00:21:21.509 262144+0 records out 00:21:21.509 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.0916 s, 262 MB/s 00:21:21.509 07:53:11 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:24.060 07:53:13 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:24.060 [2024-11-29 07:53:13.493877] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:21:24.060 [2024-11-29 07:53:13.493966] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77521 ] 00:21:24.060 [2024-11-29 07:53:13.647318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.060 [2024-11-29 07:53:13.742488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:24.060 [2024-11-29 07:53:14.002792] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:24.060 [2024-11-29 07:53:14.002864] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:24.323 [2024-11-29 07:53:14.160283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.323 [2024-11-29 07:53:14.160333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:24.323 [2024-11-29 07:53:14.160346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:24.323 [2024-11-29 07:53:14.160354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.323 [2024-11-29 07:53:14.160400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.323 [2024-11-29 07:53:14.160413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:24.323 [2024-11-29 07:53:14.160421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:21:24.323 [2024-11-29 07:53:14.160428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.323 [2024-11-29 07:53:14.160461] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
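The dd invocation above writes bs=4K count=256K, i.e. 262144 blocks of 4096 bytes = 1073741824 bytes, and the reported rate is decimal: 1073741824 B / 4.0916 s comes out at about 262 MB/s. A quick check of that arithmetic:

/* dd_check.c - sanity-check the dd figures reported above:
 * bs=4K count=256K is 262144 blocks of 4096 bytes = 1073741824 bytes,
 * and 1073741824 B / 4.0916 s is ~262 MB/s (dd reports decimal MB).
 */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t blocks = 256 * 1024;          /* count=256K */
    uint64_t bs     = 4 * 1024;            /* bs=4K      */
    uint64_t bytes  = blocks * bs;
    double   secs   = 4.0916;              /* elapsed time from the log */

    printf("%llu bytes\n", (unsigned long long)bytes);   /* 1073741824 */
    printf("%.0f MB/s\n", bytes / secs / 1e6);           /* ~262 */
    return 0;
}

dd prints both figures because it mixes units: 1073741824 bytes is 1.0 GiB in powers of two but about 1.1 GB in powers of ten.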
Using nvc0n1p0 as write buffer cache 00:21:24.323 [2024-11-29 07:53:14.161187] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:24.323 [2024-11-29 07:53:14.161210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.323 [2024-11-29 07:53:14.161217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:24.323 [2024-11-29 07:53:14.161226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.768 ms 00:21:24.323 [2024-11-29 07:53:14.161233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.323 [2024-11-29 07:53:14.162331] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:24.323 [2024-11-29 07:53:14.175180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.323 [2024-11-29 07:53:14.175216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:24.323 [2024-11-29 07:53:14.175228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.851 ms 00:21:24.323 [2024-11-29 07:53:14.175236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.323 [2024-11-29 07:53:14.175301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.323 [2024-11-29 07:53:14.175310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:24.323 [2024-11-29 07:53:14.175318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:21:24.323 [2024-11-29 07:53:14.175326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.323 [2024-11-29 07:53:14.180480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.323 [2024-11-29 07:53:14.180507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:24.323 [2024-11-29 07:53:14.180517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.096 ms 00:21:24.323 [2024-11-29 07:53:14.180527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.323 [2024-11-29 07:53:14.180600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.323 [2024-11-29 07:53:14.180608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:24.323 [2024-11-29 07:53:14.180616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:21:24.323 [2024-11-29 07:53:14.180623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.323 [2024-11-29 07:53:14.180670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.323 [2024-11-29 07:53:14.180679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:24.323 [2024-11-29 07:53:14.180687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:24.323 [2024-11-29 07:53:14.180694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.323 [2024-11-29 07:53:14.180717] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:24.323 [2024-11-29 07:53:14.183959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.323 [2024-11-29 07:53:14.183986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:24.323 [2024-11-29 07:53:14.183997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.246 ms 00:21:24.323 [2024-11-29 07:53:14.184004] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.323 [2024-11-29 07:53:14.184032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.323 [2024-11-29 07:53:14.184039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:24.323 [2024-11-29 07:53:14.184047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:24.324 [2024-11-29 07:53:14.184054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.324 [2024-11-29 07:53:14.184072] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:24.324 [2024-11-29 07:53:14.184089] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:24.324 [2024-11-29 07:53:14.184123] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:24.324 [2024-11-29 07:53:14.184140] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:24.324 [2024-11-29 07:53:14.184241] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:24.324 [2024-11-29 07:53:14.184251] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:24.324 [2024-11-29 07:53:14.184261] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:24.324 [2024-11-29 07:53:14.184270] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:24.324 [2024-11-29 07:53:14.184279] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:24.324 [2024-11-29 07:53:14.184287] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:24.324 [2024-11-29 07:53:14.184294] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:24.324 [2024-11-29 07:53:14.184304] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:24.324 [2024-11-29 07:53:14.184311] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:24.324 [2024-11-29 07:53:14.184318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.324 [2024-11-29 07:53:14.184325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:24.324 [2024-11-29 07:53:14.184332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:21:24.324 [2024-11-29 07:53:14.184339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.324 [2024-11-29 07:53:14.184420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.324 [2024-11-29 07:53:14.184428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:24.324 [2024-11-29 07:53:14.184435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:24.324 [2024-11-29 07:53:14.184460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.324 [2024-11-29 07:53:14.184565] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:24.324 [2024-11-29 07:53:14.184575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:24.324 [2024-11-29 07:53:14.184583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:21:24.324 [2024-11-29 07:53:14.184591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:24.324 [2024-11-29 07:53:14.184598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:24.324 [2024-11-29 07:53:14.184605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:24.324 [2024-11-29 07:53:14.184612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:24.324 [2024-11-29 07:53:14.184619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:24.324 [2024-11-29 07:53:14.184625] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:24.324 [2024-11-29 07:53:14.184632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:24.324 [2024-11-29 07:53:14.184639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:24.324 [2024-11-29 07:53:14.184647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:24.324 [2024-11-29 07:53:14.184653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:24.324 [2024-11-29 07:53:14.184665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:24.324 [2024-11-29 07:53:14.184672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:24.324 [2024-11-29 07:53:14.184678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:24.324 [2024-11-29 07:53:14.184684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:24.324 [2024-11-29 07:53:14.184691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:24.324 [2024-11-29 07:53:14.184697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:24.324 [2024-11-29 07:53:14.184704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:24.324 [2024-11-29 07:53:14.184711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:24.324 [2024-11-29 07:53:14.184717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:24.324 [2024-11-29 07:53:14.184724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:24.324 [2024-11-29 07:53:14.184730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:24.324 [2024-11-29 07:53:14.184736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:24.324 [2024-11-29 07:53:14.184743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:24.324 [2024-11-29 07:53:14.184749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:24.324 [2024-11-29 07:53:14.184755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:24.324 [2024-11-29 07:53:14.184762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:24.324 [2024-11-29 07:53:14.184769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:24.324 [2024-11-29 07:53:14.184775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:24.324 [2024-11-29 07:53:14.184781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:24.324 [2024-11-29 07:53:14.184788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:24.324 [2024-11-29 07:53:14.184794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:24.324 [2024-11-29 07:53:14.184800] ftl_layout.c: 130:dump_region: *NOTICE*: 
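The 80.00 MiB of the Region l2p entry in the dump above follows directly from the layout parameters printed earlier: 20971520 L2P entries at an address size of 4 bytes. A one-line check:

/* l2p_size.c - check the l2p region size in the layout dump above:
 * 20971520 L2P entries x 4 bytes per entry = 83886080 bytes = 80.00 MiB,
 * matching the "Region l2p ... blocks: 80.00 MiB" line.
 */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t entries   = 20971520;  /* "L2P entries" from ftl_layout_setup */
    uint64_t addr_size = 4;         /* "L2P address size"                  */
    uint64_t bytes     = entries * addr_size;

    printf("l2p region: %llu bytes = %.2f MiB\n",
           (unsigned long long)bytes, bytes / (1024.0 * 1024.0));
    return 0;
}

At a 4 KiB block size those 20971520 entries address 80 GiB, noticeably less than the 102400 MiB data_btm region; the gap is presumably over-provisioned capacity, though the log does not say so explicitly.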
[FTL][ftl0] Region trim_md_mirror 00:21:24.324 [2024-11-29 07:53:14.184807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:24.324 [2024-11-29 07:53:14.184812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:24.324 [2024-11-29 07:53:14.184819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:24.324 [2024-11-29 07:53:14.184825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:24.324 [2024-11-29 07:53:14.184831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:24.324 [2024-11-29 07:53:14.184837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:24.324 [2024-11-29 07:53:14.184844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:24.324 [2024-11-29 07:53:14.184859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:24.324 [2024-11-29 07:53:14.184867] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:24.324 [2024-11-29 07:53:14.184875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:24.324 [2024-11-29 07:53:14.184883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:24.324 [2024-11-29 07:53:14.184890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:24.324 [2024-11-29 07:53:14.184897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:24.324 [2024-11-29 07:53:14.184904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:24.324 [2024-11-29 07:53:14.184911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:24.324 [2024-11-29 07:53:14.184918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:24.324 [2024-11-29 07:53:14.184924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:24.324 [2024-11-29 07:53:14.184930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:24.324 [2024-11-29 07:53:14.184938] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:24.324 [2024-11-29 07:53:14.184947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:24.324 [2024-11-29 07:53:14.184958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:24.324 [2024-11-29 07:53:14.184965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:24.324 [2024-11-29 07:53:14.184972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:24.324 [2024-11-29 07:53:14.184979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:24.324 [2024-11-29 07:53:14.184986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:24.324 [2024-11-29 07:53:14.184993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:24.324 [2024-11-29 07:53:14.185000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:24.324 [2024-11-29 07:53:14.185007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:24.324 [2024-11-29 07:53:14.185014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:24.324 [2024-11-29 07:53:14.185021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:24.324 [2024-11-29 07:53:14.185028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:24.324 [2024-11-29 07:53:14.185035] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:24.324 [2024-11-29 07:53:14.185041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:24.324 [2024-11-29 07:53:14.185049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:24.324 [2024-11-29 07:53:14.185056] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:24.324 [2024-11-29 07:53:14.185064] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:24.324 [2024-11-29 07:53:14.185071] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:24.324 [2024-11-29 07:53:14.185078] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:24.324 [2024-11-29 07:53:14.185085] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:24.324 [2024-11-29 07:53:14.185092] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:24.324 [2024-11-29 07:53:14.185100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.325 [2024-11-29 07:53:14.185108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:24.325 [2024-11-29 07:53:14.185115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.603 ms 00:21:24.325 [2024-11-29 07:53:14.185121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.325 [2024-11-29 07:53:14.211348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.325 [2024-11-29 07:53:14.211383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:24.325 [2024-11-29 07:53:14.211393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.175 ms 00:21:24.325 [2024-11-29 07:53:14.211404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.325 [2024-11-29 07:53:14.211495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.325 [2024-11-29 07:53:14.211504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:24.325 [2024-11-29 07:53:14.211511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
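The SB metadata layout rows above give offsets and sizes in FTL blocks rather than MiB. Assuming the usual 4096-byte FTL block size (an assumption; the log never prints the block size), the type:0x2 row (blk_offs:0x20 blk_sz:0x5000) converts to the same 0.12 MiB offset and 80.00 MiB span as the l2p region in the earlier MiB-denominated dump:

/* sb_layout.c - convert one SB metadata layout row above into MiB,
 * assuming a 4096-byte FTL block (assumption). Region type 0x2 with
 * blk_offs:0x20 blk_sz:0x5000 then sits at 0.12 MiB and spans 80 MiB,
 * i.e. the same l2p region as in the NV cache layout dump.
 */
#include <stdint.h>
#include <stdio.h>

#define FTL_BLOCK_SIZE 4096ULL   /* assumed FTL block size in bytes */

int main(void)
{
    uint64_t blk_offs = 0x20, blk_sz = 0x5000;

    printf("offset: %.2f MiB, size: %.2f MiB\n",
           blk_offs * FTL_BLOCK_SIZE / (1024.0 * 1024.0),
           blk_sz   * FTL_BLOCK_SIZE / (1024.0 * 1024.0));
    return 0;
}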
0.073 ms 00:21:24.325 [2024-11-29 07:53:14.211519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.325 [2024-11-29 07:53:14.256247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.325 [2024-11-29 07:53:14.256289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:24.325 [2024-11-29 07:53:14.256301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.679 ms 00:21:24.325 [2024-11-29 07:53:14.256309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.325 [2024-11-29 07:53:14.256349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.325 [2024-11-29 07:53:14.256358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:24.325 [2024-11-29 07:53:14.256370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:24.325 [2024-11-29 07:53:14.256377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.325 [2024-11-29 07:53:14.256764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.325 [2024-11-29 07:53:14.256790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:24.325 [2024-11-29 07:53:14.256799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:21:24.325 [2024-11-29 07:53:14.256806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.325 [2024-11-29 07:53:14.256940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.325 [2024-11-29 07:53:14.256949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:24.325 [2024-11-29 07:53:14.256960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:21:24.325 [2024-11-29 07:53:14.256967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.584 [2024-11-29 07:53:14.270176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.584 [2024-11-29 07:53:14.270214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:24.584 [2024-11-29 07:53:14.270224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.189 ms 00:21:24.584 [2024-11-29 07:53:14.270231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.584 [2024-11-29 07:53:14.283116] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:21:24.584 [2024-11-29 07:53:14.283157] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:24.584 [2024-11-29 07:53:14.283169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.584 [2024-11-29 07:53:14.283177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:24.584 [2024-11-29 07:53:14.283186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.849 ms 00:21:24.584 [2024-11-29 07:53:14.283193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.584 [2024-11-29 07:53:14.307791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.584 [2024-11-29 07:53:14.307833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:24.584 [2024-11-29 07:53:14.307843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.561 ms 00:21:24.584 [2024-11-29 07:53:14.307851] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.584 [2024-11-29 07:53:14.320072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.584 [2024-11-29 07:53:14.320103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:24.584 [2024-11-29 07:53:14.320113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.181 ms 00:21:24.584 [2024-11-29 07:53:14.320120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.584 [2024-11-29 07:53:14.331951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.584 [2024-11-29 07:53:14.331982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:24.584 [2024-11-29 07:53:14.331992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.799 ms 00:21:24.584 [2024-11-29 07:53:14.331999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.584 [2024-11-29 07:53:14.332608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.584 [2024-11-29 07:53:14.332632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:24.584 [2024-11-29 07:53:14.332642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.525 ms 00:21:24.584 [2024-11-29 07:53:14.332652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.584 [2024-11-29 07:53:14.388468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.584 [2024-11-29 07:53:14.388517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:24.584 [2024-11-29 07:53:14.388529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.798 ms 00:21:24.584 [2024-11-29 07:53:14.388542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.584 [2024-11-29 07:53:14.398771] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:24.585 [2024-11-29 07:53:14.401062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.585 [2024-11-29 07:53:14.401093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:24.585 [2024-11-29 07:53:14.401105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.479 ms 00:21:24.585 [2024-11-29 07:53:14.401114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.585 [2024-11-29 07:53:14.401196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.585 [2024-11-29 07:53:14.401208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:24.585 [2024-11-29 07:53:14.401218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:24.585 [2024-11-29 07:53:14.401227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.585 [2024-11-29 07:53:14.401292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.585 [2024-11-29 07:53:14.401302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:24.585 [2024-11-29 07:53:14.401311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:21:24.585 [2024-11-29 07:53:14.401317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.585 [2024-11-29 07:53:14.401336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.585 [2024-11-29 07:53:14.401344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:21:24.585 [2024-11-29 07:53:14.401352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:24.585 [2024-11-29 07:53:14.401360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.585 [2024-11-29 07:53:14.401389] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:24.585 [2024-11-29 07:53:14.401399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.585 [2024-11-29 07:53:14.401407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:24.585 [2024-11-29 07:53:14.401415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:24.585 [2024-11-29 07:53:14.401422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.585 [2024-11-29 07:53:14.425262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.585 [2024-11-29 07:53:14.425302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:24.585 [2024-11-29 07:53:14.425314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.823 ms 00:21:24.585 [2024-11-29 07:53:14.425327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.585 [2024-11-29 07:53:14.425400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.585 [2024-11-29 07:53:14.425409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:24.585 [2024-11-29 07:53:14.425418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:21:24.585 [2024-11-29 07:53:14.425425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.585 [2024-11-29 07:53:14.426494] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 265.768 ms, result 0 00:21:25.527  [2024-11-29T07:53:16.858Z] Copying: 16/1024 [MB] (16 MBps) [2024-11-29T07:53:17.802Z] Copying: 33/1024 [MB] (16 MBps) [2024-11-29T07:53:18.746Z] Copying: 52/1024 [MB] (19 MBps) [2024-11-29T07:53:19.685Z] Copying: 68/1024 [MB] (15 MBps) [2024-11-29T07:53:20.616Z] Copying: 79/1024 [MB] (11 MBps) [2024-11-29T07:53:21.553Z] Copying: 102/1024 [MB] (22 MBps) [2024-11-29T07:53:22.497Z] Copying: 147/1024 [MB] (45 MBps) [2024-11-29T07:53:23.878Z] Copying: 164/1024 [MB] (16 MBps) [2024-11-29T07:53:24.451Z] Copying: 210/1024 [MB] (46 MBps) [2024-11-29T07:53:25.834Z] Copying: 225/1024 [MB] (14 MBps) [2024-11-29T07:53:26.775Z] Copying: 243/1024 [MB] (18 MBps) [2024-11-29T07:53:27.710Z] Copying: 255/1024 [MB] (11 MBps) [2024-11-29T07:53:28.649Z] Copying: 278/1024 [MB] (23 MBps) [2024-11-29T07:53:29.594Z] Copying: 305/1024 [MB] (27 MBps) [2024-11-29T07:53:30.537Z] Copying: 318/1024 [MB] (13 MBps) [2024-11-29T07:53:31.479Z] Copying: 332/1024 [MB] (13 MBps) [2024-11-29T07:53:32.859Z] Copying: 345/1024 [MB] (12 MBps) [2024-11-29T07:53:33.794Z] Copying: 357/1024 [MB] (12 MBps) [2024-11-29T07:53:34.733Z] Copying: 374/1024 [MB] (16 MBps) [2024-11-29T07:53:35.687Z] Copying: 406/1024 [MB] (32 MBps) [2024-11-29T07:53:36.664Z] Copying: 422/1024 [MB] (15 MBps) [2024-11-29T07:53:37.621Z] Copying: 441/1024 [MB] (19 MBps) [2024-11-29T07:53:38.564Z] Copying: 454/1024 [MB] (12 MBps) [2024-11-29T07:53:39.505Z] Copying: 468/1024 [MB] (13 MBps) [2024-11-29T07:53:40.439Z] Copying: 478/1024 [MB] (10 MBps) [2024-11-29T07:53:41.821Z] Copying: 509/1024 [MB] (31 MBps) [2024-11-29T07:53:42.760Z] Copying: 528/1024 [MB] (19 MBps) 
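The 'FTL startup' total of 265.768 ms above is nearly the sum of the individual trace_step durations: adding the 33 per-step figures as printed gives 261.680 ms, with the remainder presumably spent between steps, outside the traced sections. A sketch that does the addition (durations transcribed from the startup trace above):

/* startup_sum.c - sum the per-step durations from the 'FTL startup'
 * trace above and compare with the reported total of 265.768 ms.
 * The ~4 ms gap is presumably time spent between steps.
 */
#include <stddef.h>
#include <stdio.h>

int main(void)
{
    /* durations in ms, in the order the steps appear in the log */
    static const double ms[] = {
        0.005, 0.030, 0.768, 12.851, 0.027, 5.096, 0.050, 0.007,
        3.246, 0.011, 0.248, 0.069, 0.603, 26.175, 0.073, 44.679,
        0.003, 0.327, 0.116, 13.189, 12.849, 24.561, 12.181, 11.799,
        0.525, 55.798, 12.479, 0.011, 0.027, 0.005, 0.012, 23.823,
        0.037,
    };
    double sum = 0.0;

    for (size_t i = 0; i < sizeof(ms) / sizeof(ms[0]); i++)
        sum += ms[i];

    printf("steps: %.3f ms, reported total: 265.768 ms\n", sum); /* 261.680 */
    return 0;
}

Restore P2L checkpoints (55.798 ms) and Initialize NV cache (44.679 ms) dominate this particular startup.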
[2024-11-29T07:53:43.701Z] Copying: 544/1024 [MB] (15 MBps) [2024-11-29T07:53:44.640Z] Copying: 558/1024 [MB] (13 MBps) [2024-11-29T07:53:45.581Z] Copying: 568/1024 [MB] (10 MBps) [2024-11-29T07:53:46.521Z] Copying: 583/1024 [MB] (15 MBps) [2024-11-29T07:53:47.463Z] Copying: 595/1024 [MB] (12 MBps) [2024-11-29T07:53:48.853Z] Copying: 614/1024 [MB] (18 MBps) [2024-11-29T07:53:49.795Z] Copying: 629/1024 [MB] (14 MBps) [2024-11-29T07:53:50.737Z] Copying: 643/1024 [MB] (14 MBps) [2024-11-29T07:53:51.682Z] Copying: 665/1024 [MB] (21 MBps) [2024-11-29T07:53:52.628Z] Copying: 679/1024 [MB] (13 MBps) [2024-11-29T07:53:53.571Z] Copying: 695/1024 [MB] (16 MBps) [2024-11-29T07:53:54.517Z] Copying: 710/1024 [MB] (15 MBps) [2024-11-29T07:53:55.461Z] Copying: 722/1024 [MB] (11 MBps) [2024-11-29T07:53:56.840Z] Copying: 735/1024 [MB] (12 MBps) [2024-11-29T07:53:57.776Z] Copying: 757/1024 [MB] (21 MBps) [2024-11-29T07:53:58.720Z] Copying: 787/1024 [MB] (30 MBps) [2024-11-29T07:53:59.663Z] Copying: 805/1024 [MB] (17 MBps) [2024-11-29T07:54:00.615Z] Copying: 822/1024 [MB] (16 MBps) [2024-11-29T07:54:01.558Z] Copying: 837/1024 [MB] (15 MBps) [2024-11-29T07:54:02.505Z] Copying: 858/1024 [MB] (21 MBps) [2024-11-29T07:54:03.450Z] Copying: 874/1024 [MB] (16 MBps) [2024-11-29T07:54:04.832Z] Copying: 889/1024 [MB] (14 MBps) [2024-11-29T07:54:05.773Z] Copying: 912/1024 [MB] (23 MBps) [2024-11-29T07:54:06.718Z] Copying: 950/1024 [MB] (37 MBps) [2024-11-29T07:54:07.733Z] Copying: 968/1024 [MB] (18 MBps) [2024-11-29T07:54:08.675Z] Copying: 988/1024 [MB] (19 MBps) [2024-11-29T07:54:09.245Z] Copying: 1007/1024 [MB] (18 MBps) [2024-11-29T07:54:09.246Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-11-29 07:54:09.069199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.302 [2024-11-29 07:54:09.069242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:19.302 [2024-11-29 07:54:09.069257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:19.302 [2024-11-29 07:54:09.069267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.302 [2024-11-29 07:54:09.069286] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:19.302 [2024-11-29 07:54:09.071844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.302 [2024-11-29 07:54:09.071869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:19.302 [2024-11-29 07:54:09.071886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.543 ms 00:22:19.302 [2024-11-29 07:54:09.071896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.302 [2024-11-29 07:54:09.073358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.302 [2024-11-29 07:54:09.073384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:19.302 [2024-11-29 07:54:09.073394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.442 ms 00:22:19.302 [2024-11-29 07:54:09.073401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.302 [2024-11-29 07:54:09.088305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.302 [2024-11-29 07:54:09.088333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:19.302 [2024-11-29 07:54:09.088343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.890 ms 00:22:19.302 [2024-11-29 07:54:09.088351] 
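The progress lines above end at 1024/1024 with an average of 18 MBps, which squares with the wall clock: startup finished at 07:53:14.426 and the shutdown trace starts at 07:54:09.069, so roughly 54.6 s for 1024 MiB. A back-of-envelope check (the elapsed figure is read off the surrounding timestamps, so the result is approximate by construction and the meter itself presumably truncates to a whole number):

/* copy_rate.c - reproduce the "average 18 MBps" figure from the
 * spdk_dd progress lines above. Elapsed time is estimated from the
 * surrounding log timestamps (07:53:14.426 -> 07:54:09.069).
 */
#include <stdio.h>

int main(void)
{
    double total_mib = 1024.0;   /* Copying: 1024/1024 [MB] */
    double elapsed_s = 54.6;     /* approximate, from the timestamps */

    printf("average: %.1f MiB/s\n", total_mib / elapsed_s);  /* ~18.8 */
    return 0;
}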
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.302 [2024-11-29 07:54:09.094481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.302 [2024-11-29 07:54:09.094505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:19.302 [2024-11-29 07:54:09.094516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.099 ms 00:22:19.302 [2024-11-29 07:54:09.094524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.302 [2024-11-29 07:54:09.118658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.302 [2024-11-29 07:54:09.118686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:19.302 [2024-11-29 07:54:09.118696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.077 ms 00:22:19.302 [2024-11-29 07:54:09.118703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.302 [2024-11-29 07:54:09.132948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.302 [2024-11-29 07:54:09.132975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:19.302 [2024-11-29 07:54:09.132986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.214 ms 00:22:19.302 [2024-11-29 07:54:09.132995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.302 [2024-11-29 07:54:09.133112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.302 [2024-11-29 07:54:09.133126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:19.302 [2024-11-29 07:54:09.133135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:22:19.302 [2024-11-29 07:54:09.133144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.302 [2024-11-29 07:54:09.157032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.302 [2024-11-29 07:54:09.157056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:19.302 [2024-11-29 07:54:09.157066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.875 ms 00:22:19.302 [2024-11-29 07:54:09.157074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.302 [2024-11-29 07:54:09.180018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.302 [2024-11-29 07:54:09.180044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:19.302 [2024-11-29 07:54:09.180054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.914 ms 00:22:19.302 [2024-11-29 07:54:09.180061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.302 [2024-11-29 07:54:09.202675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.302 [2024-11-29 07:54:09.202701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:19.302 [2024-11-29 07:54:09.202711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.584 ms 00:22:19.302 [2024-11-29 07:54:09.202718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.302 [2024-11-29 07:54:09.225188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.302 [2024-11-29 07:54:09.225214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:19.302 [2024-11-29 07:54:09.225224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 22.405 ms 00:22:19.302 [2024-11-29 07:54:09.225232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.302 [2024-11-29 07:54:09.225262] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:19.302 [2024-11-29 07:54:09.225275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:19.302 [2024-11-29 07:54:09.225290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:19.302 [2024-11-29 07:54:09.225297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:19.302 [2024-11-29 07:54:09.225305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:19.302 [2024-11-29 07:54:09.225312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:19.302 [2024-11-29 07:54:09.225320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:19.302 [2024-11-29 07:54:09.225327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:19.302 [2024-11-29 07:54:09.225334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:19.302 [2024-11-29 07:54:09.225342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:19.302 [2024-11-29 07:54:09.225349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:19.302 [2024-11-29 07:54:09.225356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:19.302 [2024-11-29 07:54:09.225363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:19.302 [2024-11-29 07:54:09.225371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:19.302 [2024-11-29 07:54:09.225378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:19.302 [2024-11-29 07:54:09.225386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:19.302 [2024-11-29 07:54:09.225393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:19.302 [2024-11-29 07:54:09.225400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:19.302 [2024-11-29 07:54:09.225407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:19.302 [2024-11-29 07:54:09.225414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:19.302 [2024-11-29 07:54:09.225421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:19.302 [2024-11-29 07:54:09.225428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:19.302 [2024-11-29 07:54:09.225435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:19.302 [2024-11-29 07:54:09.225459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:19.302 
[2024-11-29 07:54:09.225468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:19.302 [2024-11-29 07:54:09.225477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:19.302 [2024-11-29 07:54:09.225484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:19.302 [2024-11-29 07:54:09.225493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:19.302 [2024-11-29 07:54:09.225500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:19.302 [2024-11-29 07:54:09.225507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:19.302 [2024-11-29 07:54:09.225515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:19.302 [2024-11-29 07:54:09.225522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:19.302 [2024-11-29 07:54:09.225529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:19.302 [2024-11-29 07:54:09.225536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:19.302 [2024-11-29 07:54:09.225543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:19.302 [2024-11-29 07:54:09.225550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:19.302 [2024-11-29 07:54:09.225557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:19.302 [2024-11-29 07:54:09.225565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:19.302 [2024-11-29 07:54:09.225572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:19.302 [2024-11-29 07:54:09.225579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:19.302 [2024-11-29 07:54:09.225586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:19.302 [2024-11-29 07:54:09.225593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:19.302 [2024-11-29 07:54:09.225600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:19.302 [2024-11-29 07:54:09.225607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:19.302 [2024-11-29 07:54:09.225614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:19.302 [2024-11-29 07:54:09.225621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:19.302 [2024-11-29 07:54:09.225628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:19.302 [2024-11-29 07:54:09.225636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:19.303 [2024-11-29 07:54:09.225643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 
state: free
[... Bands 49 through 97 omitted: every band in this dump reports the identical line "0 / 261120 wr_cnt: 0 state: free" ...]
00:22:19.303 [2024-11-29 07:54:09.226014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*:
[FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:19.303 [2024-11-29 07:54:09.226021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:19.303 [2024-11-29 07:54:09.226028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:19.303 [2024-11-29 07:54:09.226043] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:19.303 [2024-11-29 07:54:09.226053] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3f1401f1-4ff6-49ff-b356-dcb7b33876dc 00:22:19.303 [2024-11-29 07:54:09.226061] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:19.303 [2024-11-29 07:54:09.226067] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:19.303 [2024-11-29 07:54:09.226074] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:19.303 [2024-11-29 07:54:09.226081] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:19.303 [2024-11-29 07:54:09.226089] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:19.303 [2024-11-29 07:54:09.226101] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:19.303 [2024-11-29 07:54:09.226108] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:19.303 [2024-11-29 07:54:09.226115] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:19.303 [2024-11-29 07:54:09.226123] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:19.303 [2024-11-29 07:54:09.226130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.303 [2024-11-29 07:54:09.226137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:19.303 [2024-11-29 07:54:09.226145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.869 ms 00:22:19.303 [2024-11-29 07:54:09.226152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.303 [2024-11-29 07:54:09.238570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.303 [2024-11-29 07:54:09.238594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:19.303 [2024-11-29 07:54:09.238605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.400 ms 00:22:19.303 [2024-11-29 07:54:09.238613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.303 [2024-11-29 07:54:09.238951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.303 [2024-11-29 07:54:09.238961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:19.303 [2024-11-29 07:54:09.238969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:22:19.303 [2024-11-29 07:54:09.238981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.565 [2024-11-29 07:54:09.271960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:19.565 [2024-11-29 07:54:09.271987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:19.565 [2024-11-29 07:54:09.271996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:19.565 [2024-11-29 07:54:09.272003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.565 [2024-11-29 07:54:09.272051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:19.565 [2024-11-29 
07:54:09.272059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:19.565 [2024-11-29 07:54:09.272066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:19.565 [2024-11-29 07:54:09.272076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.565 [2024-11-29 07:54:09.272122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:19.565 [2024-11-29 07:54:09.272132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:19.565 [2024-11-29 07:54:09.272140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:19.565 [2024-11-29 07:54:09.272148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.565 [2024-11-29 07:54:09.272161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:19.565 [2024-11-29 07:54:09.272168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:19.565 [2024-11-29 07:54:09.272175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:19.565 [2024-11-29 07:54:09.272182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.565 [2024-11-29 07:54:09.348850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:19.565 [2024-11-29 07:54:09.348885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:19.565 [2024-11-29 07:54:09.348895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:19.565 [2024-11-29 07:54:09.348903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.565 [2024-11-29 07:54:09.411561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:19.565 [2024-11-29 07:54:09.411594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:19.565 [2024-11-29 07:54:09.411603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:19.565 [2024-11-29 07:54:09.411616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.565 [2024-11-29 07:54:09.411679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:19.565 [2024-11-29 07:54:09.411689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:19.565 [2024-11-29 07:54:09.411696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:19.565 [2024-11-29 07:54:09.411704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.565 [2024-11-29 07:54:09.411736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:19.565 [2024-11-29 07:54:09.411745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:19.565 [2024-11-29 07:54:09.411752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:19.565 [2024-11-29 07:54:09.411760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.565 [2024-11-29 07:54:09.411846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:19.565 [2024-11-29 07:54:09.411856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:19.565 [2024-11-29 07:54:09.411863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:19.565 [2024-11-29 07:54:09.411871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.565 [2024-11-29 07:54:09.411898] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:19.565 [2024-11-29 07:54:09.411906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:19.565 [2024-11-29 07:54:09.411914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:19.565 [2024-11-29 07:54:09.411920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.565 [2024-11-29 07:54:09.411953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:19.565 [2024-11-29 07:54:09.411964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:19.565 [2024-11-29 07:54:09.411972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:19.565 [2024-11-29 07:54:09.411980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.565 [2024-11-29 07:54:09.412018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:19.565 [2024-11-29 07:54:09.412028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:19.565 [2024-11-29 07:54:09.412035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:19.565 [2024-11-29 07:54:09.412042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.565 [2024-11-29 07:54:09.412152] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 342.923 ms, result 0 00:22:20.510 00:22:20.510 00:22:20.510 07:54:10 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:22:20.510 [2024-11-29 07:54:10.431505] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
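The spdk_dd command above is the restore read-back itself: --ib=ftl0 names the FTL bdev to read from, --of points at the scratch file to fill, and --count=262144 is the number of blocks to copy. A quick size check (a sketch; the 4096-byte logical block size is inferred from the totals below, not something this log states explicitly):

  262144 blocks x 4096 B/block = 1,073,741,824 B = 1024 MiB

which matches the 1024 [MB] end point that the Copying progress lines further down count toward.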
00:22:20.510 [2024-11-29 07:54:10.431629] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78109 ] 00:22:20.772 [2024-11-29 07:54:10.591706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:20.772 [2024-11-29 07:54:10.707986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:21.347 [2024-11-29 07:54:10.985745] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:21.347 [2024-11-29 07:54:10.985821] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:21.347 [2024-11-29 07:54:11.140868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.347 [2024-11-29 07:54:11.140907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:21.347 [2024-11-29 07:54:11.140920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:21.347 [2024-11-29 07:54:11.140928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.347 [2024-11-29 07:54:11.140970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.347 [2024-11-29 07:54:11.140981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:21.347 [2024-11-29 07:54:11.140989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:22:21.347 [2024-11-29 07:54:11.140997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.347 [2024-11-29 07:54:11.141013] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:21.347 [2024-11-29 07:54:11.141662] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:21.347 [2024-11-29 07:54:11.141678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.347 [2024-11-29 07:54:11.141686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:21.347 [2024-11-29 07:54:11.141695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.670 ms 00:22:21.347 [2024-11-29 07:54:11.141702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.347 [2024-11-29 07:54:11.142799] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:21.347 [2024-11-29 07:54:11.155614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.347 [2024-11-29 07:54:11.155642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:21.347 [2024-11-29 07:54:11.155653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.818 ms 00:22:21.348 [2024-11-29 07:54:11.155661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.348 [2024-11-29 07:54:11.155712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.348 [2024-11-29 07:54:11.155722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:21.348 [2024-11-29 07:54:11.155730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:22:21.348 [2024-11-29 07:54:11.155737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.348 [2024-11-29 07:54:11.160641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:21.348 [2024-11-29 07:54:11.160664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:21.348 [2024-11-29 07:54:11.160673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.849 ms 00:22:21.348 [2024-11-29 07:54:11.160684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.348 [2024-11-29 07:54:11.160747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.348 [2024-11-29 07:54:11.160756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:21.348 [2024-11-29 07:54:11.160764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:22:21.348 [2024-11-29 07:54:11.160772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.348 [2024-11-29 07:54:11.160811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.348 [2024-11-29 07:54:11.160820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:21.348 [2024-11-29 07:54:11.160850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:21.348 [2024-11-29 07:54:11.160857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.348 [2024-11-29 07:54:11.160881] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:21.348 [2024-11-29 07:54:11.164155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.348 [2024-11-29 07:54:11.164178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:21.348 [2024-11-29 07:54:11.164190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.279 ms 00:22:21.348 [2024-11-29 07:54:11.164197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.348 [2024-11-29 07:54:11.164224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.348 [2024-11-29 07:54:11.164233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:21.348 [2024-11-29 07:54:11.164241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:21.348 [2024-11-29 07:54:11.164249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.348 [2024-11-29 07:54:11.164266] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:21.348 [2024-11-29 07:54:11.164284] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:21.348 [2024-11-29 07:54:11.164318] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:21.348 [2024-11-29 07:54:11.164336] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:21.348 [2024-11-29 07:54:11.164438] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:21.348 [2024-11-29 07:54:11.164459] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:21.348 [2024-11-29 07:54:11.164470] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:21.348 [2024-11-29 07:54:11.164480] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:21.348 [2024-11-29 07:54:11.164489] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:21.348 [2024-11-29 07:54:11.164497] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:21.348 [2024-11-29 07:54:11.164504] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:21.348 [2024-11-29 07:54:11.164514] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:21.348 [2024-11-29 07:54:11.164522] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:21.348 [2024-11-29 07:54:11.164529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.348 [2024-11-29 07:54:11.164537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:21.348 [2024-11-29 07:54:11.164544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:22:21.348 [2024-11-29 07:54:11.164551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.348 [2024-11-29 07:54:11.164633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.348 [2024-11-29 07:54:11.164642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:21.348 [2024-11-29 07:54:11.164649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:22:21.348 [2024-11-29 07:54:11.164656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.348 [2024-11-29 07:54:11.164757] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:21.348 [2024-11-29 07:54:11.164767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:21.348 [2024-11-29 07:54:11.164776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:21.348 [2024-11-29 07:54:11.164784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:21.348 [2024-11-29 07:54:11.164791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:21.348 [2024-11-29 07:54:11.164799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:21.348 [2024-11-29 07:54:11.164806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:21.348 [2024-11-29 07:54:11.164813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:21.348 [2024-11-29 07:54:11.164820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:21.348 [2024-11-29 07:54:11.164837] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:21.348 [2024-11-29 07:54:11.164844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:21.348 [2024-11-29 07:54:11.164851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:21.348 [2024-11-29 07:54:11.164858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:21.348 [2024-11-29 07:54:11.164871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:21.348 [2024-11-29 07:54:11.164878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:21.348 [2024-11-29 07:54:11.164884] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:21.348 [2024-11-29 07:54:11.164891] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:21.348 [2024-11-29 07:54:11.164900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:21.348 [2024-11-29 07:54:11.164907] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:21.348 [2024-11-29 07:54:11.164913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:21.348 [2024-11-29 07:54:11.164920] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:21.348 [2024-11-29 07:54:11.164927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:21.348 [2024-11-29 07:54:11.164934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:21.348 [2024-11-29 07:54:11.164940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:21.348 [2024-11-29 07:54:11.164947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:21.348 [2024-11-29 07:54:11.164953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:21.348 [2024-11-29 07:54:11.164960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:21.348 [2024-11-29 07:54:11.164966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:21.348 [2024-11-29 07:54:11.164973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:21.348 [2024-11-29 07:54:11.164980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:21.348 [2024-11-29 07:54:11.164987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:21.348 [2024-11-29 07:54:11.164993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:21.348 [2024-11-29 07:54:11.164999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:21.348 [2024-11-29 07:54:11.165006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:21.348 [2024-11-29 07:54:11.165012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:21.348 [2024-11-29 07:54:11.165018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:21.348 [2024-11-29 07:54:11.165025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:21.348 [2024-11-29 07:54:11.165032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:21.348 [2024-11-29 07:54:11.165039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:21.348 [2024-11-29 07:54:11.165045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:21.348 [2024-11-29 07:54:11.165051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:21.348 [2024-11-29 07:54:11.165058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:21.348 [2024-11-29 07:54:11.165065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:21.348 [2024-11-29 07:54:11.165071] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:21.348 [2024-11-29 07:54:11.165079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:21.348 [2024-11-29 07:54:11.165087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:21.348 [2024-11-29 07:54:11.165094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:21.348 [2024-11-29 07:54:11.165101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:21.348 [2024-11-29 07:54:11.165108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:21.348 [2024-11-29 07:54:11.165116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:21.348 
[2024-11-29 07:54:11.165123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:21.348 [2024-11-29 07:54:11.165130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:21.348 [2024-11-29 07:54:11.165137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:21.348 [2024-11-29 07:54:11.165145] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:21.349 [2024-11-29 07:54:11.165154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:21.349 [2024-11-29 07:54:11.165166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:21.349 [2024-11-29 07:54:11.165175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:21.349 [2024-11-29 07:54:11.165182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:21.349 [2024-11-29 07:54:11.165189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:21.349 [2024-11-29 07:54:11.165196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:21.349 [2024-11-29 07:54:11.165204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:21.349 [2024-11-29 07:54:11.165211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:21.349 [2024-11-29 07:54:11.165219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:21.349 [2024-11-29 07:54:11.165226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:21.349 [2024-11-29 07:54:11.165233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:21.349 [2024-11-29 07:54:11.165239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:21.349 [2024-11-29 07:54:11.165247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:21.349 [2024-11-29 07:54:11.165254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:21.349 [2024-11-29 07:54:11.165261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:21.349 [2024-11-29 07:54:11.165268] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:21.349 [2024-11-29 07:54:11.165276] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:21.349 [2024-11-29 07:54:11.165285] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:21.349 [2024-11-29 07:54:11.165292] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:21.349 [2024-11-29 07:54:11.165299] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:21.349 [2024-11-29 07:54:11.165307] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:21.349 [2024-11-29 07:54:11.165314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.349 [2024-11-29 07:54:11.165322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:21.349 [2024-11-29 07:54:11.165330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.625 ms 00:22:21.349 [2024-11-29 07:54:11.165337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.349 [2024-11-29 07:54:11.191234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.349 [2024-11-29 07:54:11.191261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:21.349 [2024-11-29 07:54:11.191271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.845 ms 00:22:21.349 [2024-11-29 07:54:11.191282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.349 [2024-11-29 07:54:11.191362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.349 [2024-11-29 07:54:11.191370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:21.349 [2024-11-29 07:54:11.191378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:22:21.349 [2024-11-29 07:54:11.191386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.349 [2024-11-29 07:54:11.235801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.349 [2024-11-29 07:54:11.235835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:21.349 [2024-11-29 07:54:11.235847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.367 ms 00:22:21.349 [2024-11-29 07:54:11.235855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.349 [2024-11-29 07:54:11.235892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.349 [2024-11-29 07:54:11.235902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:21.349 [2024-11-29 07:54:11.235914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:21.349 [2024-11-29 07:54:11.235921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.349 [2024-11-29 07:54:11.236280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.349 [2024-11-29 07:54:11.236304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:21.349 [2024-11-29 07:54:11.236314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:22:21.349 [2024-11-29 07:54:11.236323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.349 [2024-11-29 07:54:11.236464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.349 [2024-11-29 07:54:11.236475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:21.349 [2024-11-29 07:54:11.236487] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:22:21.349 [2024-11-29 07:54:11.236495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.349 [2024-11-29 07:54:11.249504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.349 [2024-11-29 07:54:11.249531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:21.349 [2024-11-29 07:54:11.249541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.989 ms 00:22:21.349 [2024-11-29 07:54:11.249549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.349 [2024-11-29 07:54:11.262258] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:21.349 [2024-11-29 07:54:11.262287] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:21.349 [2024-11-29 07:54:11.262299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.349 [2024-11-29 07:54:11.262307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:21.349 [2024-11-29 07:54:11.262316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.664 ms 00:22:21.349 [2024-11-29 07:54:11.262323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.349 [2024-11-29 07:54:11.286960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.349 [2024-11-29 07:54:11.286988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:21.349 [2024-11-29 07:54:11.286999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.602 ms 00:22:21.349 [2024-11-29 07:54:11.287006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.611 [2024-11-29 07:54:11.298755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.611 [2024-11-29 07:54:11.298779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:21.611 [2024-11-29 07:54:11.298789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.705 ms 00:22:21.611 [2024-11-29 07:54:11.298796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.611 [2024-11-29 07:54:11.310082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.611 [2024-11-29 07:54:11.310106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:21.611 [2024-11-29 07:54:11.310116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.255 ms 00:22:21.611 [2024-11-29 07:54:11.310124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.611 [2024-11-29 07:54:11.310723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.611 [2024-11-29 07:54:11.310743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:21.611 [2024-11-29 07:54:11.310755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.521 ms 00:22:21.611 [2024-11-29 07:54:11.310763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.611 [2024-11-29 07:54:11.365591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.611 [2024-11-29 07:54:11.365627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:21.611 [2024-11-29 07:54:11.365643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 54.812 ms 00:22:21.611 [2024-11-29 07:54:11.365651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.611 [2024-11-29 07:54:11.376117] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:21.611 [2024-11-29 07:54:11.378335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.611 [2024-11-29 07:54:11.378359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:21.612 [2024-11-29 07:54:11.378371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.648 ms 00:22:21.612 [2024-11-29 07:54:11.378380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.612 [2024-11-29 07:54:11.378470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.612 [2024-11-29 07:54:11.378482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:21.612 [2024-11-29 07:54:11.378494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:21.612 [2024-11-29 07:54:11.378504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.612 [2024-11-29 07:54:11.378566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.612 [2024-11-29 07:54:11.378577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:21.612 [2024-11-29 07:54:11.378585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:22:21.612 [2024-11-29 07:54:11.378592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.612 [2024-11-29 07:54:11.378609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.612 [2024-11-29 07:54:11.378617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:21.612 [2024-11-29 07:54:11.378625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:21.612 [2024-11-29 07:54:11.378632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.612 [2024-11-29 07:54:11.378664] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:21.612 [2024-11-29 07:54:11.378675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.612 [2024-11-29 07:54:11.378682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:21.612 [2024-11-29 07:54:11.378689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:21.612 [2024-11-29 07:54:11.378696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.612 [2024-11-29 07:54:11.401783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.612 [2024-11-29 07:54:11.401812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:21.612 [2024-11-29 07:54:11.401826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.070 ms 00:22:21.612 [2024-11-29 07:54:11.401834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.612 [2024-11-29 07:54:11.401900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.612 [2024-11-29 07:54:11.401910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:21.612 [2024-11-29 07:54:11.401918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:22:21.612 [2024-11-29 07:54:11.401925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
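Every management step in the startup sequence above is logged as the same four-field quadruple: Action (or Rollback), name, duration, status. When skimming long runs like this one, a throwaway one-liner can total the per-step durations as a rough cross-check against the overall figure that finish_msg reports next (a convenience sketch, not part of the test; "build.log" stands in for wherever this console output was saved):

  grep -o 'duration: [0-9.]* ms' build.log | awk '{sum += $2} END { printf "%.3f ms\n", sum }'

The per-step sum should come in at or below the reported process duration, since time spent between steps is not attributed to any step.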
00:22:21.612 [2024-11-29 07:54:11.402873] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 261.590 ms, result 0 00:22:22.999  [2024-11-29T07:54:13.886Z] Copying: 20/1024 [MB] (20 MBps)
[... per-second progress updates omitted: the copy advances from 20 MB to 917 MB between 07:54:13Z and 07:55:08Z, mostly at 10-25 MBps ...]
[2024-11-29T07:55:09.985Z] Copying: 935/1024 [MB] (18 MBps) [2024-11-29T07:55:10.624Z] Copying: 949/1024 [MB] (14 MBps) [2024-11-29T07:55:11.608Z] Copying: 970/1024 [MB] (20 MBps) [2024-11-29T07:55:12.999Z] Copying: 991/1024 [MB] (20 MBps) [2024-11-29T07:55:13.261Z] Copying: 1013/1024 [MB] (22 MBps) [2024-11-29T07:55:13.832Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-29 07:55:13.575018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.888 [2024-11-29 07:55:13.575078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:23.888 [2024-11-29 07:55:13.575092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:23.888 [2024-11-29 07:55:13.575099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.888 [2024-11-29 07:55:13.575118] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:23.888 [2024-11-29 07:55:13.577483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.888 [2024-11-29 07:55:13.577517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:23.888 [2024-11-29 07:55:13.577527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.352 ms 00:23:23.888 [2024-11-29 07:55:13.577534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.888 [2024-11-29 07:55:13.577721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.888 [2024-11-29 07:55:13.577735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:23.888 [2024-11-29 07:55:13.577743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.167 ms 00:23:23.888 [2024-11-29 07:55:13.577750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.888 [2024-11-29 07:55:13.580418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.888 [2024-11-29 07:55:13.580432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:23.888 [2024-11-29 07:55:13.580440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.658 ms 00:23:23.888 [2024-11-29 07:55:13.580457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.888 [2024-11-29 07:55:13.585827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.888 [2024-11-29 07:55:13.585852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:23.888 [2024-11-29 07:55:13.585861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.357 ms 00:23:23.888 [2024-11-29 07:55:13.585869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.888 [2024-11-29 07:55:13.606822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.888 [2024-11-29 07:55:13.606852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:23.888 [2024-11-29 07:55:13.606862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.896 ms 00:23:23.888 [2024-11-29 07:55:13.606868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.888 [2024-11-29 07:55:13.618332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.888 [2024-11-29 07:55:13.618358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:23.888 [2024-11-29 07:55:13.618367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.445 ms 
00:23:23.888 [2024-11-29 07:55:13.618375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.888 [2024-11-29 07:55:13.618480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.888 [2024-11-29 07:55:13.618488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:23.888 [2024-11-29 07:55:13.618495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:23:23.888 [2024-11-29 07:55:13.618502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.888 [2024-11-29 07:55:13.636873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.888 [2024-11-29 07:55:13.636897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:23.888 [2024-11-29 07:55:13.636905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.360 ms 00:23:23.888 [2024-11-29 07:55:13.636911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.888 [2024-11-29 07:55:13.654438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.888 [2024-11-29 07:55:13.654466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:23.888 [2024-11-29 07:55:13.654474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.511 ms 00:23:23.888 [2024-11-29 07:55:13.654479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.888 [2024-11-29 07:55:13.671468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.888 [2024-11-29 07:55:13.671489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:23.888 [2024-11-29 07:55:13.671497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.975 ms 00:23:23.888 [2024-11-29 07:55:13.671503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.888 [2024-11-29 07:55:13.688628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.888 [2024-11-29 07:55:13.688650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:23.888 [2024-11-29 07:55:13.688658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.094 ms 00:23:23.888 [2024-11-29 07:55:13.688663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.888 [2024-11-29 07:55:13.688677] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:23.888 [2024-11-29 07:55:13.688692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:23.888 [2024-11-29 07:55:13.688702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:23.888 [2024-11-29 07:55:13.688709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:23.888 [2024-11-29 07:55:13.688715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:23.888 [2024-11-29 07:55:13.688721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:23.888 [2024-11-29 07:55:13.688727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:23.888 [2024-11-29 07:55:13.688733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:23.888 [2024-11-29 07:55:13.688739] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
[... Bands 9 through 81 omitted: every band in this dump again reports the identical line "0 / 261120 wr_cnt: 0 state: free" ...]
00:23:23.889 [2024-11-29 07:55:13.689173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120
wr_cnt: 0 state: free 00:23:23.889 [2024-11-29 07:55:13.689179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:23.889 [2024-11-29 07:55:13.689184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:23.889 [2024-11-29 07:55:13.689190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:23.889 [2024-11-29 07:55:13.689196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:23.889 [2024-11-29 07:55:13.689201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:23.889 [2024-11-29 07:55:13.689207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:23.889 [2024-11-29 07:55:13.689213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:23.889 [2024-11-29 07:55:13.689220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:23.889 [2024-11-29 07:55:13.689226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:23.889 [2024-11-29 07:55:13.689232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:23.889 [2024-11-29 07:55:13.689237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:23.889 [2024-11-29 07:55:13.689244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:23.889 [2024-11-29 07:55:13.689250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:23.889 [2024-11-29 07:55:13.689256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:23.889 [2024-11-29 07:55:13.689262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:23.889 [2024-11-29 07:55:13.689267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:23.889 [2024-11-29 07:55:13.689273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:23.889 [2024-11-29 07:55:13.689278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:23.889 [2024-11-29 07:55:13.689291] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:23.889 [2024-11-29 07:55:13.689296] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3f1401f1-4ff6-49ff-b356-dcb7b33876dc 00:23:23.889 [2024-11-29 07:55:13.689302] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:23.889 [2024-11-29 07:55:13.689307] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:23.889 [2024-11-29 07:55:13.689312] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:23.889 [2024-11-29 07:55:13.689318] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:23.890 [2024-11-29 07:55:13.689328] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:23.890 [2024-11-29 07:55:13.689334] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:23.890 
[2024-11-29 07:55:13.689339] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:23.890 [2024-11-29 07:55:13.689345] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:23.890 [2024-11-29 07:55:13.689350] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:23.890 [2024-11-29 07:55:13.689355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.890 [2024-11-29 07:55:13.689361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:23.890 [2024-11-29 07:55:13.689367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.679 ms 00:23:23.890 [2024-11-29 07:55:13.689375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.890 [2024-11-29 07:55:13.698906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.890 [2024-11-29 07:55:13.698927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:23.890 [2024-11-29 07:55:13.698934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.519 ms 00:23:23.890 [2024-11-29 07:55:13.698940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.890 [2024-11-29 07:55:13.699201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.890 [2024-11-29 07:55:13.699208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:23.890 [2024-11-29 07:55:13.699219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:23:23.890 [2024-11-29 07:55:13.699224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.890 [2024-11-29 07:55:13.725184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:23.890 [2024-11-29 07:55:13.725207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:23.890 [2024-11-29 07:55:13.725214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:23.890 [2024-11-29 07:55:13.725220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.890 [2024-11-29 07:55:13.725265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:23.890 [2024-11-29 07:55:13.725271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:23.890 [2024-11-29 07:55:13.725279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:23.890 [2024-11-29 07:55:13.725284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.890 [2024-11-29 07:55:13.725326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:23.890 [2024-11-29 07:55:13.725334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:23.890 [2024-11-29 07:55:13.725340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:23.890 [2024-11-29 07:55:13.725345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.890 [2024-11-29 07:55:13.725356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:23.890 [2024-11-29 07:55:13.725363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:23.890 [2024-11-29 07:55:13.725368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:23.890 [2024-11-29 07:55:13.725376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.890 [2024-11-29 07:55:13.785211] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:23:23.890 [2024-11-29 07:55:13.785239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:23.890 [2024-11-29 07:55:13.785248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:23.890 [2024-11-29 07:55:13.785254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.149 [2024-11-29 07:55:13.834514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:24.149 [2024-11-29 07:55:13.834544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:24.149 [2024-11-29 07:55:13.834556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:24.149 [2024-11-29 07:55:13.834562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.149 [2024-11-29 07:55:13.834610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:24.149 [2024-11-29 07:55:13.834617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:24.149 [2024-11-29 07:55:13.834623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:24.149 [2024-11-29 07:55:13.834629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.149 [2024-11-29 07:55:13.834655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:24.149 [2024-11-29 07:55:13.834661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:24.149 [2024-11-29 07:55:13.834668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:24.149 [2024-11-29 07:55:13.834674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.149 [2024-11-29 07:55:13.834742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:24.149 [2024-11-29 07:55:13.834749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:24.149 [2024-11-29 07:55:13.834755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:24.149 [2024-11-29 07:55:13.834761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.149 [2024-11-29 07:55:13.834782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:24.149 [2024-11-29 07:55:13.834789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:24.149 [2024-11-29 07:55:13.834795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:24.149 [2024-11-29 07:55:13.834801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.149 [2024-11-29 07:55:13.834831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:24.149 [2024-11-29 07:55:13.834839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:24.149 [2024-11-29 07:55:13.834845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:24.149 [2024-11-29 07:55:13.834850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.149 [2024-11-29 07:55:13.834880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:24.149 [2024-11-29 07:55:13.834887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:24.149 [2024-11-29 07:55:13.834893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:24.149 [2024-11-29 07:55:13.834899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.149 
[2024-11-29 07:55:13.834988] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 259.952 ms, result 0 00:23:24.717 00:23:24.717 00:23:24.717 07:55:14 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:26.100 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:23:26.100 07:55:16 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:23:26.359 [2024-11-29 07:55:16.049788] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:23:26.359 [2024-11-29 07:55:16.049879] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78784 ] 00:23:26.359 [2024-11-29 07:55:16.197776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.359 [2024-11-29 07:55:16.272916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:26.617 [2024-11-29 07:55:16.484932] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:26.617 [2024-11-29 07:55:16.484983] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:26.878 [2024-11-29 07:55:16.631940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.878 [2024-11-29 07:55:16.631978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:26.878 [2024-11-29 07:55:16.631988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:26.878 [2024-11-29 07:55:16.631994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.878 [2024-11-29 07:55:16.632028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.878 [2024-11-29 07:55:16.632037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:26.878 [2024-11-29 07:55:16.632043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:23:26.878 [2024-11-29 07:55:16.632049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.878 [2024-11-29 07:55:16.632061] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:26.878 [2024-11-29 07:55:16.632604] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:26.878 [2024-11-29 07:55:16.632621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.878 [2024-11-29 07:55:16.632627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:26.878 [2024-11-29 07:55:16.632633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.563 ms 00:23:26.878 [2024-11-29 07:55:16.632638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.878 [2024-11-29 07:55:16.633583] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:26.878 [2024-11-29 07:55:16.643088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.878 [2024-11-29 07:55:16.643122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:26.878 [2024-11-29 
07:55:16.643132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.506 ms 00:23:26.878 [2024-11-29 07:55:16.643138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.878 [2024-11-29 07:55:16.643184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.878 [2024-11-29 07:55:16.643191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:26.878 [2024-11-29 07:55:16.643198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:23:26.878 [2024-11-29 07:55:16.643203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.878 [2024-11-29 07:55:16.647624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.878 [2024-11-29 07:55:16.647645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:26.878 [2024-11-29 07:55:16.647652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.384 ms 00:23:26.878 [2024-11-29 07:55:16.647661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.878 [2024-11-29 07:55:16.647713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.878 [2024-11-29 07:55:16.647720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:26.878 [2024-11-29 07:55:16.647726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:23:26.878 [2024-11-29 07:55:16.647731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.878 [2024-11-29 07:55:16.647771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.878 [2024-11-29 07:55:16.647779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:26.878 [2024-11-29 07:55:16.647785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:26.878 [2024-11-29 07:55:16.647790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.878 [2024-11-29 07:55:16.647805] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:26.878 [2024-11-29 07:55:16.650450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.878 [2024-11-29 07:55:16.650470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:26.878 [2024-11-29 07:55:16.650478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.648 ms 00:23:26.878 [2024-11-29 07:55:16.650484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.878 [2024-11-29 07:55:16.650509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.878 [2024-11-29 07:55:16.650516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:26.878 [2024-11-29 07:55:16.650522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:26.878 [2024-11-29 07:55:16.650528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.878 [2024-11-29 07:55:16.650541] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:26.878 [2024-11-29 07:55:16.650556] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:26.878 [2024-11-29 07:55:16.650582] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:26.878 [2024-11-29 07:55:16.650596] upgrade/ftl_sb_v5.c: 
294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:26.878 [2024-11-29 07:55:16.650675] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:26.878 [2024-11-29 07:55:16.650683] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:26.878 [2024-11-29 07:55:16.650690] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:26.878 [2024-11-29 07:55:16.650698] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:26.878 [2024-11-29 07:55:16.650705] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:26.878 [2024-11-29 07:55:16.650711] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:26.878 [2024-11-29 07:55:16.650717] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:26.878 [2024-11-29 07:55:16.650724] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:26.878 [2024-11-29 07:55:16.650730] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:26.878 [2024-11-29 07:55:16.650736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.878 [2024-11-29 07:55:16.650741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:26.878 [2024-11-29 07:55:16.650747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:23:26.878 [2024-11-29 07:55:16.650753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.878 [2024-11-29 07:55:16.650816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.878 [2024-11-29 07:55:16.650822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:26.878 [2024-11-29 07:55:16.650827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:23:26.878 [2024-11-29 07:55:16.650832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.878 [2024-11-29 07:55:16.650910] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:26.878 [2024-11-29 07:55:16.650917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:26.878 [2024-11-29 07:55:16.650924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:26.878 [2024-11-29 07:55:16.650929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:26.878 [2024-11-29 07:55:16.650935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:26.878 [2024-11-29 07:55:16.650941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:26.878 [2024-11-29 07:55:16.650946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:26.878 [2024-11-29 07:55:16.650952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:26.878 [2024-11-29 07:55:16.650957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:26.878 [2024-11-29 07:55:16.650962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:26.878 [2024-11-29 07:55:16.650968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:26.878 [2024-11-29 07:55:16.650973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 
MiB 00:23:26.878 [2024-11-29 07:55:16.650978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:26.878 [2024-11-29 07:55:16.650986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:26.878 [2024-11-29 07:55:16.650992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:26.878 [2024-11-29 07:55:16.650997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:26.878 [2024-11-29 07:55:16.651002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:26.878 [2024-11-29 07:55:16.651008] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:26.878 [2024-11-29 07:55:16.651013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:26.879 [2024-11-29 07:55:16.651018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:26.879 [2024-11-29 07:55:16.651023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:26.879 [2024-11-29 07:55:16.651028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:26.879 [2024-11-29 07:55:16.651033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:26.879 [2024-11-29 07:55:16.651038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:26.879 [2024-11-29 07:55:16.651043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:26.879 [2024-11-29 07:55:16.651048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:26.879 [2024-11-29 07:55:16.651053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:26.879 [2024-11-29 07:55:16.651057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:26.879 [2024-11-29 07:55:16.651062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:26.879 [2024-11-29 07:55:16.651067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:26.879 [2024-11-29 07:55:16.651073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:26.879 [2024-11-29 07:55:16.651077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:26.879 [2024-11-29 07:55:16.651082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:26.879 [2024-11-29 07:55:16.651087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:26.879 [2024-11-29 07:55:16.651092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:26.879 [2024-11-29 07:55:16.651097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:26.879 [2024-11-29 07:55:16.651103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:26.879 [2024-11-29 07:55:16.651108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:26.879 [2024-11-29 07:55:16.651114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:26.879 [2024-11-29 07:55:16.651118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:26.879 [2024-11-29 07:55:16.651123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:26.879 [2024-11-29 07:55:16.651129] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:26.879 [2024-11-29 07:55:16.651134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:26.879 [2024-11-29 07:55:16.651139] ftl_layout.c: 
775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:26.879 [2024-11-29 07:55:16.651145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:26.879 [2024-11-29 07:55:16.651151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:26.879 [2024-11-29 07:55:16.651156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:26.879 [2024-11-29 07:55:16.651162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:26.879 [2024-11-29 07:55:16.651167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:26.879 [2024-11-29 07:55:16.651173] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:26.879 [2024-11-29 07:55:16.651178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:26.879 [2024-11-29 07:55:16.651183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:26.879 [2024-11-29 07:55:16.651188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:26.879 [2024-11-29 07:55:16.651195] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:26.879 [2024-11-29 07:55:16.651201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:26.879 [2024-11-29 07:55:16.651209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:26.879 [2024-11-29 07:55:16.651215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:26.879 [2024-11-29 07:55:16.651221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:26.879 [2024-11-29 07:55:16.651227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:26.879 [2024-11-29 07:55:16.651232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:26.879 [2024-11-29 07:55:16.651237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:26.879 [2024-11-29 07:55:16.651242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:26.879 [2024-11-29 07:55:16.651247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:26.879 [2024-11-29 07:55:16.651253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:26.879 [2024-11-29 07:55:16.651258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:26.879 [2024-11-29 07:55:16.651263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:26.879 [2024-11-29 07:55:16.651269] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:26.879 [2024-11-29 07:55:16.651274] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:26.879 [2024-11-29 07:55:16.651281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:26.879 [2024-11-29 07:55:16.651286] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:26.879 [2024-11-29 07:55:16.651292] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:26.879 [2024-11-29 07:55:16.651299] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:26.879 [2024-11-29 07:55:16.651304] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:26.879 [2024-11-29 07:55:16.651310] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:26.879 [2024-11-29 07:55:16.651315] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:26.879 [2024-11-29 07:55:16.651320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.879 [2024-11-29 07:55:16.651326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:26.879 [2024-11-29 07:55:16.651332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.463 ms 00:23:26.879 [2024-11-29 07:55:16.651337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.879 [2024-11-29 07:55:16.672123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.879 [2024-11-29 07:55:16.672149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:26.879 [2024-11-29 07:55:16.672157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.756 ms 00:23:26.879 [2024-11-29 07:55:16.672165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.879 [2024-11-29 07:55:16.672228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.879 [2024-11-29 07:55:16.672234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:26.879 [2024-11-29 07:55:16.672240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:23:26.879 [2024-11-29 07:55:16.672246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.879 [2024-11-29 07:55:16.711951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.879 [2024-11-29 07:55:16.711982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:26.879 [2024-11-29 07:55:16.711991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.666 ms 00:23:26.879 [2024-11-29 07:55:16.711998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.879 [2024-11-29 07:55:16.712029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.879 [2024-11-29 07:55:16.712037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:26.879 [2024-11-29 07:55:16.712046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:23:26.879 [2024-11-29 07:55:16.712052] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.879 [2024-11-29 07:55:16.712361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.879 [2024-11-29 07:55:16.712381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:26.879 [2024-11-29 07:55:16.712388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:23:26.879 [2024-11-29 07:55:16.712394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.879 [2024-11-29 07:55:16.712502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.879 [2024-11-29 07:55:16.712510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:26.879 [2024-11-29 07:55:16.712516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:23:26.880 [2024-11-29 07:55:16.712525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.880 [2024-11-29 07:55:16.723014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.880 [2024-11-29 07:55:16.723036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:26.880 [2024-11-29 07:55:16.723046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.473 ms 00:23:26.880 [2024-11-29 07:55:16.723051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.880 [2024-11-29 07:55:16.732789] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:26.880 [2024-11-29 07:55:16.732821] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:26.880 [2024-11-29 07:55:16.732830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.880 [2024-11-29 07:55:16.732837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:26.880 [2024-11-29 07:55:16.732843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.712 ms 00:23:26.880 [2024-11-29 07:55:16.732849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.880 [2024-11-29 07:55:16.751596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.880 [2024-11-29 07:55:16.751631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:26.880 [2024-11-29 07:55:16.751640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.716 ms 00:23:26.880 [2024-11-29 07:55:16.751646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.880 [2024-11-29 07:55:16.760663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.880 [2024-11-29 07:55:16.760686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:26.880 [2024-11-29 07:55:16.760693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.979 ms 00:23:26.880 [2024-11-29 07:55:16.760700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.880 [2024-11-29 07:55:16.769305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.880 [2024-11-29 07:55:16.769328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:26.880 [2024-11-29 07:55:16.769335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.578 ms 00:23:26.880 [2024-11-29 07:55:16.769341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.880 
[2024-11-29 07:55:16.769808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.880 [2024-11-29 07:55:16.769825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:26.880 [2024-11-29 07:55:16.769833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.411 ms 00:23:26.880 [2024-11-29 07:55:16.769839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.880 [2024-11-29 07:55:16.813411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.880 [2024-11-29 07:55:16.813454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:26.880 [2024-11-29 07:55:16.813469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.558 ms 00:23:26.880 [2024-11-29 07:55:16.813476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.138 [2024-11-29 07:55:16.821194] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:27.138 [2024-11-29 07:55:16.822923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.138 [2024-11-29 07:55:16.822944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:27.138 [2024-11-29 07:55:16.822952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.411 ms 00:23:27.138 [2024-11-29 07:55:16.822960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.138 [2024-11-29 07:55:16.823016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.138 [2024-11-29 07:55:16.823025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:27.138 [2024-11-29 07:55:16.823034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:27.138 [2024-11-29 07:55:16.823040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.138 [2024-11-29 07:55:16.823081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.138 [2024-11-29 07:55:16.823089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:27.138 [2024-11-29 07:55:16.823095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:23:27.138 [2024-11-29 07:55:16.823101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.138 [2024-11-29 07:55:16.823114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.138 [2024-11-29 07:55:16.823120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:27.138 [2024-11-29 07:55:16.823126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:27.138 [2024-11-29 07:55:16.823132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.138 [2024-11-29 07:55:16.823157] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:27.138 [2024-11-29 07:55:16.823165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.138 [2024-11-29 07:55:16.823170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:27.138 [2024-11-29 07:55:16.823176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:27.138 [2024-11-29 07:55:16.823182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.138 [2024-11-29 07:55:16.841121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.138 [2024-11-29 
07:55:16.841146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:27.138 [2024-11-29 07:55:16.841158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.927 ms 00:23:27.138 [2024-11-29 07:55:16.841164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.138 [2024-11-29 07:55:16.841223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.138 [2024-11-29 07:55:16.841231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:27.138 [2024-11-29 07:55:16.841237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:23:27.138 [2024-11-29 07:55:16.841243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.139 [2024-11-29 07:55:16.842001] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 209.739 ms, result 0 00:23:28.084  [2024-11-29T07:55:18.975Z] Copying: 27/1024 [MB] (27 MBps) [2024-11-29T07:55:19.921Z] Copying: 46/1024 [MB] (19 MBps) [2024-11-29T07:55:20.865Z] Copying: 59/1024 [MB] (12 MBps) [2024-11-29T07:55:22.255Z] Copying: 71/1024 [MB] (12 MBps) [2024-11-29T07:55:23.198Z] Copying: 93/1024 [MB] (21 MBps) [2024-11-29T07:55:24.145Z] Copying: 110/1024 [MB] (16 MBps) [2024-11-29T07:55:25.088Z] Copying: 129/1024 [MB] (19 MBps) [2024-11-29T07:55:26.035Z] Copying: 152/1024 [MB] (23 MBps) [2024-11-29T07:55:26.979Z] Copying: 175/1024 [MB] (22 MBps) [2024-11-29T07:55:27.926Z] Copying: 200/1024 [MB] (24 MBps) [2024-11-29T07:55:28.870Z] Copying: 220/1024 [MB] (20 MBps) [2024-11-29T07:55:30.260Z] Copying: 239/1024 [MB] (18 MBps) [2024-11-29T07:55:31.205Z] Copying: 259/1024 [MB] (20 MBps) [2024-11-29T07:55:32.150Z] Copying: 279/1024 [MB] (20 MBps) [2024-11-29T07:55:33.097Z] Copying: 301/1024 [MB] (21 MBps) [2024-11-29T07:55:34.042Z] Copying: 314/1024 [MB] (13 MBps) [2024-11-29T07:55:34.986Z] Copying: 324/1024 [MB] (10 MBps) [2024-11-29T07:55:35.931Z] Copying: 336/1024 [MB] (11 MBps) [2024-11-29T07:55:36.877Z] Copying: 347/1024 [MB] (11 MBps) [2024-11-29T07:55:38.268Z] Copying: 357/1024 [MB] (10 MBps) [2024-11-29T07:55:39.214Z] Copying: 368/1024 [MB] (10 MBps) [2024-11-29T07:55:40.151Z] Copying: 378/1024 [MB] (10 MBps) [2024-11-29T07:55:41.095Z] Copying: 416/1024 [MB] (37 MBps) [2024-11-29T07:55:42.091Z] Copying: 435/1024 [MB] (19 MBps) [2024-11-29T07:55:43.062Z] Copying: 450/1024 [MB] (14 MBps) [2024-11-29T07:55:44.008Z] Copying: 463/1024 [MB] (13 MBps) [2024-11-29T07:55:44.953Z] Copying: 476/1024 [MB] (12 MBps) [2024-11-29T07:55:45.902Z] Copying: 490/1024 [MB] (13 MBps) [2024-11-29T07:55:47.290Z] Copying: 511/1024 [MB] (21 MBps) [2024-11-29T07:55:47.860Z] Copying: 533/1024 [MB] (21 MBps) [2024-11-29T07:55:49.248Z] Copying: 547/1024 [MB] (13 MBps) [2024-11-29T07:55:50.192Z] Copying: 562/1024 [MB] (15 MBps) [2024-11-29T07:55:51.136Z] Copying: 573/1024 [MB] (11 MBps) [2024-11-29T07:55:52.081Z] Copying: 588/1024 [MB] (14 MBps) [2024-11-29T07:55:53.041Z] Copying: 605/1024 [MB] (17 MBps) [2024-11-29T07:55:53.972Z] Copying: 642/1024 [MB] (36 MBps) [2024-11-29T07:55:54.913Z] Copying: 695/1024 [MB] (52 MBps) [2024-11-29T07:55:56.294Z] Copying: 721/1024 [MB] (26 MBps) [2024-11-29T07:55:56.867Z] Copying: 741/1024 [MB] (19 MBps) [2024-11-29T07:55:58.255Z] Copying: 754/1024 [MB] (13 MBps) [2024-11-29T07:55:59.197Z] Copying: 771/1024 [MB] (16 MBps) [2024-11-29T07:56:00.141Z] Copying: 787/1024 [MB] (15 MBps) [2024-11-29T07:56:01.092Z] Copying: 804/1024 [MB] (17 MBps) 
[2024-11-29T07:56:02.037Z] Copying: 816/1024 [MB] (11 MBps) [2024-11-29T07:56:02.976Z] Copying: 832/1024 [MB] (16 MBps) [2024-11-29T07:56:03.920Z] Copying: 860/1024 [MB] (27 MBps) [2024-11-29T07:56:04.862Z] Copying: 873/1024 [MB] (13 MBps) [2024-11-29T07:56:06.247Z] Copying: 904688/1048576 [kB] (10240 kBps) [2024-11-29T07:56:07.184Z] Copying: 914872/1048576 [kB] (10184 kBps) [2024-11-29T07:56:08.127Z] Copying: 916/1024 [MB] (22 MBps) [2024-11-29T07:56:09.070Z] Copying: 928/1024 [MB] (12 MBps) [2024-11-29T07:56:10.016Z] Copying: 940/1024 [MB] (12 MBps) [2024-11-29T07:56:10.957Z] Copying: 953/1024 [MB] (12 MBps) [2024-11-29T07:56:11.902Z] Copying: 965/1024 [MB] (11 MBps) [2024-11-29T07:56:13.286Z] Copying: 978/1024 [MB] (13 MBps) [2024-11-29T07:56:13.886Z] Copying: 989/1024 [MB] (10 MBps) [2024-11-29T07:56:15.270Z] Copying: 1022832/1048576 [kB] (9956 kBps) [2024-11-29T07:56:15.841Z] Copying: 1023/1024 [MB] (24 MBps) [2024-11-29T07:56:15.841Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-11-29 07:56:15.589891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.897 [2024-11-29 07:56:15.589977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:25.897 [2024-11-29 07:56:15.590008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:25.897 [2024-11-29 07:56:15.590017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.897 [2024-11-29 07:56:15.591011] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:25.897 [2024-11-29 07:56:15.595898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.897 [2024-11-29 07:56:15.595952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:25.897 [2024-11-29 07:56:15.595965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.856 ms 00:24:25.897 [2024-11-29 07:56:15.595974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.897 [2024-11-29 07:56:15.609693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.897 [2024-11-29 07:56:15.609762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:25.897 [2024-11-29 07:56:15.609777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.066 ms 00:24:25.897 [2024-11-29 07:56:15.609793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.897 [2024-11-29 07:56:15.632350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.897 [2024-11-29 07:56:15.632400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:25.897 [2024-11-29 07:56:15.632413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.538 ms 00:24:25.897 [2024-11-29 07:56:15.632422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.897 [2024-11-29 07:56:15.638597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.897 [2024-11-29 07:56:15.638637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:25.897 [2024-11-29 07:56:15.638650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.130 ms 00:24:25.897 [2024-11-29 07:56:15.638667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.897 [2024-11-29 07:56:15.665317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.897 [2024-11-29 07:56:15.665369] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:25.897 [2024-11-29 07:56:15.665383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.587 ms 00:24:25.897 [2024-11-29 07:56:15.665391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.897 [2024-11-29 07:56:15.680810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.897 [2024-11-29 07:56:15.680863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:25.897 [2024-11-29 07:56:15.680876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.371 ms 00:24:25.897 [2024-11-29 07:56:15.680885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.160 [2024-11-29 07:56:15.863752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.160 [2024-11-29 07:56:15.863821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:26.160 [2024-11-29 07:56:15.863835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 182.814 ms 00:24:26.160 [2024-11-29 07:56:15.863844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.160 [2024-11-29 07:56:15.889382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.160 [2024-11-29 07:56:15.889436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:26.160 [2024-11-29 07:56:15.889467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.521 ms 00:24:26.160 [2024-11-29 07:56:15.889475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.160 [2024-11-29 07:56:15.915035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.160 [2024-11-29 07:56:15.915083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:26.160 [2024-11-29 07:56:15.915095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.514 ms 00:24:26.160 [2024-11-29 07:56:15.915103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.160 [2024-11-29 07:56:15.939975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.160 [2024-11-29 07:56:15.940016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:26.160 [2024-11-29 07:56:15.940028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.827 ms 00:24:26.160 [2024-11-29 07:56:15.940037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.160 [2024-11-29 07:56:15.964967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.160 [2024-11-29 07:56:15.965013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:26.160 [2024-11-29 07:56:15.965025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.857 ms 00:24:26.160 [2024-11-29 07:56:15.965033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.160 [2024-11-29 07:56:15.965077] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:26.160 [2024-11-29 07:56:15.965094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 101376 / 261120 wr_cnt: 1 state: open 00:24:26.160 [2024-11-29 07:56:15.965105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:26.160 [2024-11-29 07:56:15.965114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 
00:24:26.160 [2024-11-29 07:56:15.965122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:26.160 [2024-11-29 07:56:15.965130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:26.160 [2024-11-29 07:56:15.965140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:26.160 [2024-11-29 07:56:15.965148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:26.160 [2024-11-29 07:56:15.965156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:26.160 [2024-11-29 07:56:15.965164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:26.160 [2024-11-29 07:56:15.965172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:26.160 [2024-11-29 07:56:15.965180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:26.160 [2024-11-29 07:56:15.965189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:26.160 [2024-11-29 07:56:15.965198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:26.160 [2024-11-29 07:56:15.965206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 
0 state: free 00:24:26.161 [2024-11-29 07:56:15.965324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
53: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965733] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:26.161 [2024-11-29 07:56:15.965921] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:26.161 [2024-11-29 07:56:15.965929] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3f1401f1-4ff6-49ff-b356-dcb7b33876dc 00:24:26.161 [2024-11-29 07:56:15.965937] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] total valid LBAs: 101376 00:24:26.162 [2024-11-29 07:56:15.965946] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 102336 00:24:26.162 [2024-11-29 07:56:15.965955] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 101376 00:24:26.162 [2024-11-29 07:56:15.965965] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0095 00:24:26.162 [2024-11-29 07:56:15.965985] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:26.162 [2024-11-29 07:56:15.965994] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:26.162 [2024-11-29 07:56:15.966003] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:26.162 [2024-11-29 07:56:15.966011] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:26.162 [2024-11-29 07:56:15.966019] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:26.162 [2024-11-29 07:56:15.966028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.162 [2024-11-29 07:56:15.966036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:26.162 [2024-11-29 07:56:15.966044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.952 ms 00:24:26.162 [2024-11-29 07:56:15.966051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.162 [2024-11-29 07:56:15.979470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.162 [2024-11-29 07:56:15.979515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:26.162 [2024-11-29 07:56:15.979534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.399 ms 00:24:26.162 [2024-11-29 07:56:15.979542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.162 [2024-11-29 07:56:15.979943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.162 [2024-11-29 07:56:15.979967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:26.162 [2024-11-29 07:56:15.979978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.365 ms 00:24:26.162 [2024-11-29 07:56:15.979986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.162 [2024-11-29 07:56:16.016579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.162 [2024-11-29 07:56:16.016633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:26.162 [2024-11-29 07:56:16.016645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.162 [2024-11-29 07:56:16.016655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.162 [2024-11-29 07:56:16.016722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.162 [2024-11-29 07:56:16.016733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:26.162 [2024-11-29 07:56:16.016743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.162 [2024-11-29 07:56:16.016753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.162 [2024-11-29 07:56:16.016849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.162 [2024-11-29 07:56:16.016867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:26.162 [2024-11-29 07:56:16.016877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
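The statistics dump above pins down where the reported write amplification comes from: 102336 total writes against 101376 user writes means 960 blocks of internal metadata and relocation traffic on top of the user data. A minimal sketch recomputing the figure with plain shell arithmetic, using only the counters quoted in the dump:

  # Recompute WAF from the ftl_dev_dump_stats counters above.
  total_writes=102336   # "total writes" in the dump
  user_writes=101376    # "user writes" in the dump
  awk -v t="$total_writes" -v u="$user_writes" \
      'BEGIN { printf "WAF: %.4f\n", t / u }'
  # 102336 / 101376 = 1.00947..., printed as WAF: 1.0095, matching the log

That "total valid LBAs" (101376) equals "user writes" is consistent with every user-written block still being valid at shutdown, with nothing yet overwritten or trimmed.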
00:24:26.162 [2024-11-29 07:56:16.016886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.162 [2024-11-29 07:56:16.016903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.162 [2024-11-29 07:56:16.016913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:26.162 [2024-11-29 07:56:16.016922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.162 [2024-11-29 07:56:16.016931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.162 [2024-11-29 07:56:16.101141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.162 [2024-11-29 07:56:16.101203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:26.162 [2024-11-29 07:56:16.101215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.162 [2024-11-29 07:56:16.101224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.424 [2024-11-29 07:56:16.170051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.424 [2024-11-29 07:56:16.170111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:26.424 [2024-11-29 07:56:16.170123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.424 [2024-11-29 07:56:16.170132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.424 [2024-11-29 07:56:16.170211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.424 [2024-11-29 07:56:16.170221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:26.424 [2024-11-29 07:56:16.170231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.424 [2024-11-29 07:56:16.170248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.424 [2024-11-29 07:56:16.170287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.424 [2024-11-29 07:56:16.170299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:26.424 [2024-11-29 07:56:16.170308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.424 [2024-11-29 07:56:16.170316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.424 [2024-11-29 07:56:16.170414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.424 [2024-11-29 07:56:16.170426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:26.424 [2024-11-29 07:56:16.170435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.424 [2024-11-29 07:56:16.170479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.424 [2024-11-29 07:56:16.170513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.424 [2024-11-29 07:56:16.170523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:26.424 [2024-11-29 07:56:16.170532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.424 [2024-11-29 07:56:16.170540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.424 [2024-11-29 07:56:16.170583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.424 [2024-11-29 07:56:16.170594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:26.424 [2024-11-29 07:56:16.170603] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.424 [2024-11-29 07:56:16.170612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.424 [2024-11-29 07:56:16.170665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.424 [2024-11-29 07:56:16.170677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:26.424 [2024-11-29 07:56:16.170686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.424 [2024-11-29 07:56:16.170694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.424 [2024-11-29 07:56:16.170835] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 583.841 ms, result 0 00:24:27.810 00:24:27.810 00:24:27.810 07:56:17 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:24:27.810 [2024-11-29 07:56:17.564227] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:24:27.810 [2024-11-29 07:56:17.564385] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79413 ] 00:24:27.810 [2024-11-29 07:56:17.727883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:28.071 [2024-11-29 07:56:17.846839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:28.333 [2024-11-29 07:56:18.143201] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:28.333 [2024-11-29 07:56:18.143293] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:28.595 [2024-11-29 07:56:18.305115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.595 [2024-11-29 07:56:18.305177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:28.595 [2024-11-29 07:56:18.305194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:28.595 [2024-11-29 07:56:18.305203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.595 [2024-11-29 07:56:18.305257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.595 [2024-11-29 07:56:18.305270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:28.595 [2024-11-29 07:56:18.305280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:24:28.595 [2024-11-29 07:56:18.305288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.595 [2024-11-29 07:56:18.305309] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:28.595 [2024-11-29 07:56:18.306071] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:28.595 [2024-11-29 07:56:18.306102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.595 [2024-11-29 07:56:18.306112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:28.595 [2024-11-29 07:56:18.306121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.798 ms
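The spdk_dd invocation above is the read-back half of the restore test: it copies data out of the FTL bdev (--ib=ftl0) into a plain file (--of=.../testfile), skipping the first 131072 blocks and copying 262144 blocks. A hedged size check, assuming --skip and --count are in logical blocks and taking the 4 KiB block size implied by the copy totals reported further down ("Copying: 1024/1024 [MB]"):

  # Size arithmetic for the spdk_dd restore step; the 4 KiB block size is
  # inferred from the 1024 MB total below, not quoted from any flag.
  bs=4096; count=262144; skip=131072
  echo "copy size:    $(( count * bs / 1024 / 1024 )) MiB"   # 1024 MiB
  echo "start offset: $(( skip  * bs / 1024 / 1024 )) MiB"   # 512 MiB

At the average of 14 MBps that the progress trace later reports, 1024 MiB works out to roughly 73 seconds, which agrees with the roughly 70 s span of the Copying timestamps (07:56:21 through 07:57:29).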
00:24:28.595 [2024-11-29 07:56:18.306129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.595 [2024-11-29 07:56:18.307839] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:28.595 [2024-11-29 07:56:18.322243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.595 [2024-11-29 07:56:18.322298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:28.595 [2024-11-29 07:56:18.322312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.406 ms 00:24:28.595 [2024-11-29 07:56:18.322322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.595 [2024-11-29 07:56:18.322400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.595 [2024-11-29 07:56:18.322410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:28.595 [2024-11-29 07:56:18.322420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:24:28.595 [2024-11-29 07:56:18.322428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.595 [2024-11-29 07:56:18.330613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.595 [2024-11-29 07:56:18.330655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:28.595 [2024-11-29 07:56:18.330666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.089 ms 00:24:28.595 [2024-11-29 07:56:18.330680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.595 [2024-11-29 07:56:18.330761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.595 [2024-11-29 07:56:18.330770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:28.595 [2024-11-29 07:56:18.330779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:24:28.595 [2024-11-29 07:56:18.330788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.595 [2024-11-29 07:56:18.330833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.595 [2024-11-29 07:56:18.330844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:28.595 [2024-11-29 07:56:18.330852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:28.595 [2024-11-29 07:56:18.330861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.595 [2024-11-29 07:56:18.330889] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:28.595 [2024-11-29 07:56:18.334960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.595 [2024-11-29 07:56:18.335002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:28.595 [2024-11-29 07:56:18.335016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.078 ms 00:24:28.595 [2024-11-29 07:56:18.335025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.595 [2024-11-29 07:56:18.335061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.595 [2024-11-29 07:56:18.335070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:28.595 [2024-11-29 07:56:18.335080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:28.595 [2024-11-29 07:56:18.335088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.595 [2024-11-29 07:56:18.335140] 
ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:28.595 [2024-11-29 07:56:18.335164] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:28.595 [2024-11-29 07:56:18.335202] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:28.596 [2024-11-29 07:56:18.335223] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:28.596 [2024-11-29 07:56:18.335329] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:28.596 [2024-11-29 07:56:18.335340] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:28.596 [2024-11-29 07:56:18.335351] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:28.596 [2024-11-29 07:56:18.335363] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:28.596 [2024-11-29 07:56:18.335373] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:28.596 [2024-11-29 07:56:18.335381] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:28.596 [2024-11-29 07:56:18.335390] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:28.596 [2024-11-29 07:56:18.335401] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:28.596 [2024-11-29 07:56:18.335409] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:28.596 [2024-11-29 07:56:18.335418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.596 [2024-11-29 07:56:18.335426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:28.596 [2024-11-29 07:56:18.335434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:24:28.596 [2024-11-29 07:56:18.335458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.596 [2024-11-29 07:56:18.335543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.596 [2024-11-29 07:56:18.335552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:28.596 [2024-11-29 07:56:18.335561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:24:28.596 [2024-11-29 07:56:18.335569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.596 [2024-11-29 07:56:18.335676] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:28.596 [2024-11-29 07:56:18.335696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:28.596 [2024-11-29 07:56:18.335705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:28.596 [2024-11-29 07:56:18.335714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:28.596 [2024-11-29 07:56:18.335721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:28.596 [2024-11-29 07:56:18.335728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:28.596 [2024-11-29 07:56:18.335735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:28.596 [2024-11-29 07:56:18.335742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
band_md 00:24:28.596 [2024-11-29 07:56:18.335750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:28.596 [2024-11-29 07:56:18.335756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:28.596 [2024-11-29 07:56:18.335764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:28.596 [2024-11-29 07:56:18.335772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:28.596 [2024-11-29 07:56:18.335781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:28.596 [2024-11-29 07:56:18.335795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:28.596 [2024-11-29 07:56:18.335802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:28.596 [2024-11-29 07:56:18.335808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:28.596 [2024-11-29 07:56:18.335815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:28.596 [2024-11-29 07:56:18.335821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:28.596 [2024-11-29 07:56:18.335829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:28.596 [2024-11-29 07:56:18.335835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:28.596 [2024-11-29 07:56:18.335842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:28.596 [2024-11-29 07:56:18.335849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:28.596 [2024-11-29 07:56:18.335856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:28.596 [2024-11-29 07:56:18.335863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:28.596 [2024-11-29 07:56:18.335869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:28.596 [2024-11-29 07:56:18.335876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:28.596 [2024-11-29 07:56:18.335883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:28.596 [2024-11-29 07:56:18.335890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:28.596 [2024-11-29 07:56:18.335897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:28.596 [2024-11-29 07:56:18.335904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:28.596 [2024-11-29 07:56:18.335911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:28.596 [2024-11-29 07:56:18.335918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:28.596 [2024-11-29 07:56:18.335924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:28.596 [2024-11-29 07:56:18.335931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:28.596 [2024-11-29 07:56:18.335938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:28.596 [2024-11-29 07:56:18.335945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:28.596 [2024-11-29 07:56:18.335951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:28.596 [2024-11-29 07:56:18.335959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:28.596 [2024-11-29 07:56:18.335966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:28.596 [2024-11-29 07:56:18.335973] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:28.596 [2024-11-29 07:56:18.335980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:28.596 [2024-11-29 07:56:18.335986] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:28.596 [2024-11-29 07:56:18.335993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:28.596 [2024-11-29 07:56:18.336001] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:28.596 [2024-11-29 07:56:18.336010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:28.596 [2024-11-29 07:56:18.336019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:28.596 [2024-11-29 07:56:18.336026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:28.596 [2024-11-29 07:56:18.336035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:28.596 [2024-11-29 07:56:18.336042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:28.596 [2024-11-29 07:56:18.336049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:28.596 [2024-11-29 07:56:18.336056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:28.596 [2024-11-29 07:56:18.336062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:28.596 [2024-11-29 07:56:18.336070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:28.596 [2024-11-29 07:56:18.336078] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:28.596 [2024-11-29 07:56:18.336088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:28.596 [2024-11-29 07:56:18.336100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:28.596 [2024-11-29 07:56:18.336107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:28.596 [2024-11-29 07:56:18.336114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:28.596 [2024-11-29 07:56:18.336121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:28.596 [2024-11-29 07:56:18.336128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:28.596 [2024-11-29 07:56:18.336137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:28.596 [2024-11-29 07:56:18.336144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:28.596 [2024-11-29 07:56:18.336152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:28.596 [2024-11-29 07:56:18.336159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:28.596 [2024-11-29 07:56:18.336167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 
ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:28.596 [2024-11-29 07:56:18.336174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:28.596 [2024-11-29 07:56:18.336181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:28.596 [2024-11-29 07:56:18.336189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:28.596 [2024-11-29 07:56:18.336197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:28.596 [2024-11-29 07:56:18.336204] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:28.596 [2024-11-29 07:56:18.336212] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:28.596 [2024-11-29 07:56:18.336221] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:28.596 [2024-11-29 07:56:18.336229] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:28.596 [2024-11-29 07:56:18.336236] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:28.596 [2024-11-29 07:56:18.336244] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:28.596 [2024-11-29 07:56:18.336252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.596 [2024-11-29 07:56:18.336261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:28.596 [2024-11-29 07:56:18.336269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.644 ms 00:24:28.596 [2024-11-29 07:56:18.336277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.596 [2024-11-29 07:56:18.368595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.596 [2024-11-29 07:56:18.368647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:28.597 [2024-11-29 07:56:18.368660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.273 ms 00:24:28.597 [2024-11-29 07:56:18.368673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.597 [2024-11-29 07:56:18.368764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.597 [2024-11-29 07:56:18.368796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:28.597 [2024-11-29 07:56:18.368805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:24:28.597 [2024-11-29 07:56:18.368813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.597 [2024-11-29 07:56:18.420009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.597 [2024-11-29 07:56:18.420064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:28.597 [2024-11-29 07:56:18.420077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.131 ms
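The superblock dump above lists each metadata region as blk_offs/blk_sz in hex blocks, and a couple of those numbers can be cross-checked against the MiB figures printed a little earlier. Region type 0x2 (blk_offs:0x20 blk_sz:0x5000) lines up with the "Region l2p" entry in the NV cache layout, and the L2P table size follows directly from the logged entry count and address size. A small sketch, using the same assumed 4 KiB block size as above:

  # Cross-check the superblock layout dump against the earlier MiB figures.
  echo "l2p region size:   $(( 0x5000 * 4096 / 1024 / 1024 )) MiB"  # 80 MiB -> "blocks: 80.00 MiB"
  echo "l2p region offset: $(( 0x20 * 4096 )) bytes"                # 131072 B = 0.125 MiB -> "offset: 0.12 MiB"
  echo "l2p table size:    $(( 20971520 * 4 / 1024 / 1024 )) MiB"   # L2P entries x 4-byte addresses = 80 MiB

All three results matching the logged figures is a quick sanity check that the layout persisted in the superblock agrees with what the earlier ftl_layout dump advertised.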
00:24:28.597 [2024-11-29 07:56:18.420087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.597 [2024-11-29 07:56:18.420137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.597 [2024-11-29 07:56:18.420148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:28.597 [2024-11-29 07:56:18.420161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:28.597 [2024-11-29 07:56:18.420169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.597 [2024-11-29 07:56:18.420817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.597 [2024-11-29 07:56:18.420856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:28.597 [2024-11-29 07:56:18.420867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.569 ms 00:24:28.597 [2024-11-29 07:56:18.420876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.597 [2024-11-29 07:56:18.421027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.597 [2024-11-29 07:56:18.421038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:28.597 [2024-11-29 07:56:18.421052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:24:28.597 [2024-11-29 07:56:18.421060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.597 [2024-11-29 07:56:18.436839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.597 [2024-11-29 07:56:18.436886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:28.597 [2024-11-29 07:56:18.436898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.758 ms 00:24:28.597 [2024-11-29 07:56:18.436906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.597 [2024-11-29 07:56:18.451211] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:24:28.597 [2024-11-29 07:56:18.451259] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:28.597 [2024-11-29 07:56:18.451273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.597 [2024-11-29 07:56:18.451283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:28.597 [2024-11-29 07:56:18.451292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.255 ms 00:24:28.597 [2024-11-29 07:56:18.451300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.597 [2024-11-29 07:56:18.477026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.597 [2024-11-29 07:56:18.477079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:28.597 [2024-11-29 07:56:18.477091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.665 ms 00:24:28.597 [2024-11-29 07:56:18.477099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.597 [2024-11-29 07:56:18.489686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.597 [2024-11-29 07:56:18.489734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:28.597 [2024-11-29 07:56:18.489746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.529 ms 00:24:28.597 [2024-11-29 07:56:18.489754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.597 [2024-11-29 
07:56:18.502462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.597 [2024-11-29 07:56:18.502513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:28.597 [2024-11-29 07:56:18.502524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.660 ms 00:24:28.597 [2024-11-29 07:56:18.502532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.597 [2024-11-29 07:56:18.503178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.597 [2024-11-29 07:56:18.503210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:28.597 [2024-11-29 07:56:18.503224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.535 ms 00:24:28.597 [2024-11-29 07:56:18.503233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.858 [2024-11-29 07:56:18.568155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.859 [2024-11-29 07:56:18.568225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:28.859 [2024-11-29 07:56:18.568249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.900 ms 00:24:28.859 [2024-11-29 07:56:18.568258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.859 [2024-11-29 07:56:18.579566] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:28.859 [2024-11-29 07:56:18.582561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.859 [2024-11-29 07:56:18.582603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:28.859 [2024-11-29 07:56:18.582616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.242 ms 00:24:28.859 [2024-11-29 07:56:18.582625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.859 [2024-11-29 07:56:18.582717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.859 [2024-11-29 07:56:18.582730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:28.859 [2024-11-29 07:56:18.582743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:24:28.859 [2024-11-29 07:56:18.582752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.859 [2024-11-29 07:56:18.584469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.859 [2024-11-29 07:56:18.584517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:28.859 [2024-11-29 07:56:18.584528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.676 ms 00:24:28.859 [2024-11-29 07:56:18.584536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.859 [2024-11-29 07:56:18.584567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.859 [2024-11-29 07:56:18.584576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:28.859 [2024-11-29 07:56:18.584585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:28.859 [2024-11-29 07:56:18.584593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.859 [2024-11-29 07:56:18.584640] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:28.859 [2024-11-29 07:56:18.584652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.859 [2024-11-29 07:56:18.584661] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:28.859 [2024-11-29 07:56:18.584671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:28.859 [2024-11-29 07:56:18.584679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.859 [2024-11-29 07:56:18.610988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.859 [2024-11-29 07:56:18.611040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:28.859 [2024-11-29 07:56:18.611059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.288 ms 00:24:28.859 [2024-11-29 07:56:18.611067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.859 [2024-11-29 07:56:18.611163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.859 [2024-11-29 07:56:18.611175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:28.859 [2024-11-29 07:56:18.611185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:24:28.859 [2024-11-29 07:56:18.611194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.859 [2024-11-29 07:56:18.612980] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 307.324 ms, result 0 00:24:30.242  [2024-11-29T07:56:21.128Z] Copying: 13/1024 [MB] (13 MBps) [2024-11-29T07:56:22.074Z] Copying: 34/1024 [MB] (20 MBps) [2024-11-29T07:56:23.019Z] Copying: 53/1024 [MB] (19 MBps) [2024-11-29T07:56:23.963Z] Copying: 73/1024 [MB] (19 MBps) [2024-11-29T07:56:24.909Z] Copying: 96/1024 [MB] (22 MBps) [2024-11-29T07:56:25.854Z] Copying: 114/1024 [MB] (18 MBps) [2024-11-29T07:56:27.244Z] Copying: 124/1024 [MB] (10 MBps) [2024-11-29T07:56:27.818Z] Copying: 135/1024 [MB] (10 MBps) [2024-11-29T07:56:29.205Z] Copying: 146/1024 [MB] (10 MBps) [2024-11-29T07:56:30.152Z] Copying: 156/1024 [MB] (10 MBps) [2024-11-29T07:56:31.099Z] Copying: 166/1024 [MB] (10 MBps) [2024-11-29T07:56:32.043Z] Copying: 177/1024 [MB] (10 MBps) [2024-11-29T07:56:32.984Z] Copying: 187/1024 [MB] (10 MBps) [2024-11-29T07:56:33.930Z] Copying: 204/1024 [MB] (17 MBps) [2024-11-29T07:56:34.875Z] Copying: 227/1024 [MB] (22 MBps) [2024-11-29T07:56:35.817Z] Copying: 248/1024 [MB] (20 MBps) [2024-11-29T07:56:37.206Z] Copying: 265/1024 [MB] (17 MBps) [2024-11-29T07:56:38.150Z] Copying: 284/1024 [MB] (19 MBps) [2024-11-29T07:56:39.091Z] Copying: 297/1024 [MB] (12 MBps) [2024-11-29T07:56:40.032Z] Copying: 312/1024 [MB] (14 MBps) [2024-11-29T07:56:40.972Z] Copying: 329/1024 [MB] (17 MBps) [2024-11-29T07:56:41.918Z] Copying: 352/1024 [MB] (22 MBps) [2024-11-29T07:56:42.862Z] Copying: 374/1024 [MB] (22 MBps) [2024-11-29T07:56:43.808Z] Copying: 393/1024 [MB] (19 MBps) [2024-11-29T07:56:45.270Z] Copying: 410/1024 [MB] (16 MBps) [2024-11-29T07:56:45.863Z] Copying: 426/1024 [MB] (16 MBps) [2024-11-29T07:56:46.808Z] Copying: 450/1024 [MB] (23 MBps) [2024-11-29T07:56:48.197Z] Copying: 474/1024 [MB] (24 MBps) [2024-11-29T07:56:49.140Z] Copying: 494/1024 [MB] (20 MBps) [2024-11-29T07:56:50.083Z] Copying: 517/1024 [MB] (22 MBps) [2024-11-29T07:56:51.027Z] Copying: 530/1024 [MB] (13 MBps) [2024-11-29T07:56:51.972Z] Copying: 543/1024 [MB] (12 MBps) [2024-11-29T07:56:52.918Z] Copying: 553/1024 [MB] (10 MBps) [2024-11-29T07:56:53.864Z] Copying: 564/1024 [MB] (11 MBps) [2024-11-29T07:56:54.810Z] Copying: 575/1024 [MB] (10 MBps) [2024-11-29T07:56:56.196Z] Copying: 586/1024 [MB] (10 
MBps) [2024-11-29T07:56:57.142Z] Copying: 596/1024 [MB] (10 MBps) [2024-11-29T07:56:58.084Z] Copying: 607/1024 [MB] (10 MBps) [2024-11-29T07:56:59.028Z] Copying: 617/1024 [MB] (10 MBps) [2024-11-29T07:56:59.968Z] Copying: 627/1024 [MB] (10 MBps) [2024-11-29T07:57:00.910Z] Copying: 638/1024 [MB] (10 MBps) [2024-11-29T07:57:01.852Z] Copying: 649/1024 [MB] (10 MBps) [2024-11-29T07:57:03.240Z] Copying: 660/1024 [MB] (10 MBps) [2024-11-29T07:57:03.815Z] Copying: 676/1024 [MB] (16 MBps) [2024-11-29T07:57:05.204Z] Copying: 696/1024 [MB] (20 MBps) [2024-11-29T07:57:06.149Z] Copying: 707/1024 [MB] (10 MBps) [2024-11-29T07:57:07.094Z] Copying: 720/1024 [MB] (13 MBps) [2024-11-29T07:57:08.038Z] Copying: 731/1024 [MB] (10 MBps) [2024-11-29T07:57:08.982Z] Copying: 741/1024 [MB] (10 MBps) [2024-11-29T07:57:09.929Z] Copying: 751/1024 [MB] (10 MBps) [2024-11-29T07:57:10.875Z] Copying: 765/1024 [MB] (13 MBps) [2024-11-29T07:57:11.820Z] Copying: 776/1024 [MB] (10 MBps) [2024-11-29T07:57:13.206Z] Copying: 787/1024 [MB] (11 MBps) [2024-11-29T07:57:14.151Z] Copying: 803/1024 [MB] (16 MBps) [2024-11-29T07:57:15.095Z] Copying: 814/1024 [MB] (11 MBps) [2024-11-29T07:57:16.036Z] Copying: 829/1024 [MB] (14 MBps) [2024-11-29T07:57:17.083Z] Copying: 839/1024 [MB] (10 MBps) [2024-11-29T07:57:18.022Z] Copying: 854/1024 [MB] (14 MBps) [2024-11-29T07:57:18.964Z] Copying: 869/1024 [MB] (15 MBps) [2024-11-29T07:57:19.903Z] Copying: 879/1024 [MB] (10 MBps) [2024-11-29T07:57:20.844Z] Copying: 891/1024 [MB] (11 MBps) [2024-11-29T07:57:22.231Z] Copying: 907/1024 [MB] (16 MBps) [2024-11-29T07:57:23.175Z] Copying: 917/1024 [MB] (10 MBps) [2024-11-29T07:57:24.119Z] Copying: 929/1024 [MB] (11 MBps) [2024-11-29T07:57:25.064Z] Copying: 939/1024 [MB] (10 MBps) [2024-11-29T07:57:26.004Z] Copying: 952/1024 [MB] (12 MBps) [2024-11-29T07:57:26.950Z] Copying: 967/1024 [MB] (15 MBps) [2024-11-29T07:57:27.895Z] Copying: 985/1024 [MB] (17 MBps) [2024-11-29T07:57:28.840Z] Copying: 999/1024 [MB] (14 MBps) [2024-11-29T07:57:29.413Z] Copying: 1016/1024 [MB] (16 MBps) [2024-11-29T07:57:29.413Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-29 07:57:29.370438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.469 [2024-11-29 07:57:29.370570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:39.469 [2024-11-29 07:57:29.370612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:39.469 [2024-11-29 07:57:29.370629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.469 [2024-11-29 07:57:29.370669] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:39.469 [2024-11-29 07:57:29.373874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.469 [2024-11-29 07:57:29.373914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:39.469 [2024-11-29 07:57:29.373925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.177 ms 00:25:39.469 [2024-11-29 07:57:29.373934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.469 [2024-11-29 07:57:29.374174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.469 [2024-11-29 07:57:29.374185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:39.469 [2024-11-29 07:57:29.374194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:25:39.469 [2024-11-29 07:57:29.374207] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.469 [2024-11-29 07:57:29.380897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.469 [2024-11-29 07:57:29.381073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:39.469 [2024-11-29 07:57:29.381150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.673 ms 00:25:39.469 [2024-11-29 07:57:29.381176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.469 [2024-11-29 07:57:29.387369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.469 [2024-11-29 07:57:29.387548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:39.469 [2024-11-29 07:57:29.387727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.141 ms 00:25:39.469 [2024-11-29 07:57:29.387775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.731 [2024-11-29 07:57:29.414340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.731 [2024-11-29 07:57:29.414524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:39.731 [2024-11-29 07:57:29.414592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.501 ms 00:25:39.731 [2024-11-29 07:57:29.414616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.731 [2024-11-29 07:57:29.430614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.731 [2024-11-29 07:57:29.430774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:39.731 [2024-11-29 07:57:29.430846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.951 ms 00:25:39.731 [2024-11-29 07:57:29.430871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.993 [2024-11-29 07:57:29.687141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.993 [2024-11-29 07:57:29.687323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:39.993 [2024-11-29 07:57:29.687389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 256.191 ms 00:25:39.993 [2024-11-29 07:57:29.687413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.993 [2024-11-29 07:57:29.713187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.993 [2024-11-29 07:57:29.713342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:39.993 [2024-11-29 07:57:29.713404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.729 ms 00:25:39.993 [2024-11-29 07:57:29.713426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.993 [2024-11-29 07:57:29.738619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.993 [2024-11-29 07:57:29.738784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:39.994 [2024-11-29 07:57:29.738853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.067 ms 00:25:39.994 [2024-11-29 07:57:29.738876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.994 [2024-11-29 07:57:29.763272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.994 [2024-11-29 07:57:29.763461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:39.994 [2024-11-29 07:57:29.763880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
24.287 ms 00:25:39.994 [2024-11-29 07:57:29.763929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.994 [2024-11-29 07:57:29.788121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.994 [2024-11-29 07:57:29.788296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:39.994 [2024-11-29 07:57:29.788367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.030 ms 00:25:39.994 [2024-11-29 07:57:29.788389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.994 [2024-11-29 07:57:29.788530] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:39.994 [2024-11-29 07:57:29.788662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:25:39.994 [2024-11-29 07:57:29.788700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.789093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.789156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.789187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.789309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.789341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.789372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.789400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.789429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.789487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.789517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.789590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.789624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.789660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.789688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.789717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.789746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.789775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.789803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 
[2024-11-29 07:57:29.789858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.789888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.789916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.789944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.789973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.790022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.790054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.790082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.790110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.790138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.790166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.790217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.790246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.790276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.790304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.790333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.790361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.790389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.790418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.790458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.790487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.790516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.790544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.790572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.790600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 
state: free 00:25:39.994 [2024-11-29 07:57:29.790628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.790696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.790726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.790755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.790782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.790811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.790839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.790867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.790895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.790924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.790987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.790999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.791008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.791018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.791026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.791035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.791043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.791051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.791060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.791068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.791077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.791085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.791094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.791102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.791110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 
0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.791119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.791127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.791135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.791143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.791151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.791160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.791167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.791175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:39.994 [2024-11-29 07:57:29.791183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:39.995 [2024-11-29 07:57:29.791190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:39.995 [2024-11-29 07:57:29.791197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:39.995 [2024-11-29 07:57:29.791213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:39.995 [2024-11-29 07:57:29.791221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:39.995 [2024-11-29 07:57:29.791228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:39.995 [2024-11-29 07:57:29.791237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:39.995 [2024-11-29 07:57:29.791245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:39.995 [2024-11-29 07:57:29.791253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:39.995 [2024-11-29 07:57:29.791261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:39.995 [2024-11-29 07:57:29.791269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:39.995 [2024-11-29 07:57:29.791277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:39.995 [2024-11-29 07:57:29.791287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:39.995 [2024-11-29 07:57:29.791296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:39.995 [2024-11-29 07:57:29.791304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:39.995 [2024-11-29 07:57:29.791312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:39.995 [2024-11-29 07:57:29.791321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:39.995 [2024-11-29 07:57:29.791329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:39.995 [2024-11-29 07:57:29.791337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:39.995 [2024-11-29 07:57:29.791344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:39.995 [2024-11-29 07:57:29.791354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:39.995 [2024-11-29 07:57:29.791361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:39.995 [2024-11-29 07:57:29.791378] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:39.995 [2024-11-29 07:57:29.791388] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3f1401f1-4ff6-49ff-b356-dcb7b33876dc 00:25:39.995 [2024-11-29 07:57:29.791397] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:25:39.995 [2024-11-29 07:57:29.791405] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 30656 00:25:39.995 [2024-11-29 07:57:29.791413] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 29696 00:25:39.995 [2024-11-29 07:57:29.791423] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0323 00:25:39.995 [2024-11-29 07:57:29.791434] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:39.995 [2024-11-29 07:57:29.791464] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:39.995 [2024-11-29 07:57:29.791473] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:39.995 [2024-11-29 07:57:29.791480] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:39.995 [2024-11-29 07:57:29.791487] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:39.995 [2024-11-29 07:57:29.791497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.995 [2024-11-29 07:57:29.791506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:39.995 [2024-11-29 07:57:29.791515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.972 ms 00:25:39.995 [2024-11-29 07:57:29.791523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.995 [2024-11-29 07:57:29.804900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.995 [2024-11-29 07:57:29.805051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:39.995 [2024-11-29 07:57:29.805074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.344 ms 00:25:39.995 [2024-11-29 07:57:29.805083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.995 [2024-11-29 07:57:29.805511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.995 [2024-11-29 07:57:29.805527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:39.995 [2024-11-29 07:57:29.805537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.387 ms 00:25:39.995 [2024-11-29 07:57:29.805545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.995 [2024-11-29 07:57:29.841779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:39.995 [2024-11-29 07:57:29.841830] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:39.995 [2024-11-29 07:57:29.841843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:39.995 [2024-11-29 07:57:29.841853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.995 [2024-11-29 07:57:29.841918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:39.995 [2024-11-29 07:57:29.841929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:39.995 [2024-11-29 07:57:29.841939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:39.995 [2024-11-29 07:57:29.841949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.995 [2024-11-29 07:57:29.842020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:39.995 [2024-11-29 07:57:29.842031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:39.995 [2024-11-29 07:57:29.842045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:39.995 [2024-11-29 07:57:29.842055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.995 [2024-11-29 07:57:29.842072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:39.995 [2024-11-29 07:57:29.842082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:39.995 [2024-11-29 07:57:29.842091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:39.995 [2024-11-29 07:57:29.842100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.995 [2024-11-29 07:57:29.925321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:39.995 [2024-11-29 07:57:29.925382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:39.995 [2024-11-29 07:57:29.925394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:39.995 [2024-11-29 07:57:29.925403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.257 [2024-11-29 07:57:29.993011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.257 [2024-11-29 07:57:29.993066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:40.257 [2024-11-29 07:57:29.993078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.257 [2024-11-29 07:57:29.993087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.257 [2024-11-29 07:57:29.993166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.257 [2024-11-29 07:57:29.993177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:40.257 [2024-11-29 07:57:29.993186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.257 [2024-11-29 07:57:29.993201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.257 [2024-11-29 07:57:29.993240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.257 [2024-11-29 07:57:29.993249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:40.257 [2024-11-29 07:57:29.993258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.257 [2024-11-29 07:57:29.993266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.257 [2024-11-29 07:57:29.993365] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:25:40.257 [2024-11-29 07:57:29.993376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:40.257 [2024-11-29 07:57:29.993384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.257 [2024-11-29 07:57:29.993392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.257 [2024-11-29 07:57:29.993426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.257 [2024-11-29 07:57:29.993436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:40.257 [2024-11-29 07:57:29.993475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.257 [2024-11-29 07:57:29.993484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.257 [2024-11-29 07:57:29.993527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.257 [2024-11-29 07:57:29.993537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:40.257 [2024-11-29 07:57:29.993546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.257 [2024-11-29 07:57:29.993554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.257 [2024-11-29 07:57:29.993604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.257 [2024-11-29 07:57:29.993616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:40.257 [2024-11-29 07:57:29.993624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.257 [2024-11-29 07:57:29.993633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.257 [2024-11-29 07:57:29.993769] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 623.307 ms, result 0 00:25:40.830 00:25:40.830 00:25:40.830 07:57:30 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:43.380 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:43.380 07:57:33 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:25:43.380 07:57:33 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:25:43.380 07:57:33 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:43.380 07:57:33 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:43.380 07:57:33 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:43.380 Process with pid 77289 is not found 00:25:43.380 Remove shared memory files 00:25:43.380 07:57:33 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 77289 00:25:43.380 07:57:33 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77289 ']' 00:25:43.380 07:57:33 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77289 00:25:43.380 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77289) - No such process 00:25:43.380 07:57:33 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 77289 is not found' 00:25:43.380 07:57:33 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:25:43.380 07:57:33 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:43.380 07:57:33 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:25:43.380 07:57:33 ftl.ftl_restore -- ftl/common.sh@206 -- # 
rm -f rm -f 00:25:43.380 07:57:33 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:25:43.380 07:57:33 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:43.380 07:57:33 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:25:43.380 ************************************ 00:25:43.380 END TEST ftl_restore 00:25:43.380 ************************************ 00:25:43.380 00:25:43.380 real 4m41.574s 00:25:43.380 user 4m29.547s 00:25:43.380 sys 0m11.772s 00:25:43.380 07:57:33 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:43.380 07:57:33 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:25:43.380 07:57:33 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:25:43.380 07:57:33 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:43.380 07:57:33 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:43.380 07:57:33 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:43.380 ************************************ 00:25:43.380 START TEST ftl_dirty_shutdown 00:25:43.380 ************************************ 00:25:43.380 07:57:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:25:43.380 * Looking for test storage... 00:25:43.380 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:43.380 07:57:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:43.380 07:57:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:25:43.380 07:57:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:43.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:43.641 --rc genhtml_branch_coverage=1 00:25:43.641 --rc genhtml_function_coverage=1 00:25:43.641 --rc genhtml_legend=1 00:25:43.641 --rc geninfo_all_blocks=1 00:25:43.641 --rc geninfo_unexecuted_blocks=1 00:25:43.641 00:25:43.641 ' 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:43.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:43.641 --rc genhtml_branch_coverage=1 00:25:43.641 --rc genhtml_function_coverage=1 00:25:43.641 --rc genhtml_legend=1 00:25:43.641 --rc geninfo_all_blocks=1 00:25:43.641 --rc geninfo_unexecuted_blocks=1 00:25:43.641 00:25:43.641 ' 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:43.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:43.641 --rc genhtml_branch_coverage=1 00:25:43.641 --rc genhtml_function_coverage=1 00:25:43.641 --rc genhtml_legend=1 00:25:43.641 --rc geninfo_all_blocks=1 00:25:43.641 --rc geninfo_unexecuted_blocks=1 00:25:43.641 00:25:43.641 ' 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:43.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:43.641 --rc genhtml_branch_coverage=1 00:25:43.641 --rc genhtml_function_coverage=1 00:25:43.641 --rc genhtml_legend=1 00:25:43.641 --rc geninfo_all_blocks=1 00:25:43.641 --rc geninfo_unexecuted_blocks=1 00:25:43.641 00:25:43.641 ' 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:25:43.641 07:57:33 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=80244 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 80244 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80244 ']' 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:43.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:43.641 07:57:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:43.641 [2024-11-29 07:57:33.498797] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
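Once spdk_tgt (pid 80244) is up and listening, the trace below shows dirty_shutdown.sh assembling its FTL instance over rpc.py: attach the base NVMe at 0000:00:11.0, clear and recreate the lvstore, carve a thin-provisioned lvol, attach the NV-cache NVMe at 0000:00:10.0, split off a cache partition, and finally create ftl0 with a 10 MiB L2P DRAM limit. Condensed into the underlying RPC calls (a sketch only; the full rpc.py path is shortened and the UUIDs are the ones reported by this run), the setup amounts to roughly:
rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0                   # base device, 1310720 x 4096 B blocks
rpc.py bdev_lvol_delete_lvstore -u aec35b25-6979-4ca1-a6c1-137abafd51c7                # clear any stale lvstore first
rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a88a7448-de78-419a-9cbb-10532c28d868    # thin lvol, 103424 MiB
rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0                     # NV cache device
rpc.py bdev_split_create nvc0n1 -s 5171 1                                              # one 5171 MiB split -> nvc0n1p0
rpc.py -t 240 bdev_ftl_create -b ftl0 -d 613ed6b3-acb1-4bbd-b21e-9ce21b4271d1 --l2p_dram_limit 10 -c nvc0n1p0
The lvol is far larger than the 5 GiB QEMU NVMe backing it (1310720 blocks x 4096 B), which is why it is created thin-provisioned.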
00:25:43.641 [2024-11-29 07:57:33.498948] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80244 ] 00:25:43.902 [2024-11-29 07:57:33.665652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:43.902 [2024-11-29 07:57:33.782271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:44.847 07:57:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:44.847 07:57:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:25:44.847 07:57:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:25:44.847 07:57:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:25:44.847 07:57:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:44.847 07:57:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:25:44.847 07:57:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:25:44.847 07:57:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:44.847 07:57:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:25:44.847 07:57:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:25:44.847 07:57:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:25:44.847 07:57:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:25:44.847 07:57:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:44.847 07:57:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:44.847 07:57:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:44.847 07:57:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:25:45.107 07:57:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:45.107 { 00:25:45.107 "name": "nvme0n1", 00:25:45.107 "aliases": [ 00:25:45.107 "336718d7-c3c4-44e4-be97-66bd56ecbdd6" 00:25:45.107 ], 00:25:45.107 "product_name": "NVMe disk", 00:25:45.107 "block_size": 4096, 00:25:45.107 "num_blocks": 1310720, 00:25:45.107 "uuid": "336718d7-c3c4-44e4-be97-66bd56ecbdd6", 00:25:45.107 "numa_id": -1, 00:25:45.107 "assigned_rate_limits": { 00:25:45.107 "rw_ios_per_sec": 0, 00:25:45.107 "rw_mbytes_per_sec": 0, 00:25:45.107 "r_mbytes_per_sec": 0, 00:25:45.107 "w_mbytes_per_sec": 0 00:25:45.107 }, 00:25:45.107 "claimed": true, 00:25:45.107 "claim_type": "read_many_write_one", 00:25:45.107 "zoned": false, 00:25:45.107 "supported_io_types": { 00:25:45.107 "read": true, 00:25:45.107 "write": true, 00:25:45.107 "unmap": true, 00:25:45.107 "flush": true, 00:25:45.107 "reset": true, 00:25:45.107 "nvme_admin": true, 00:25:45.107 "nvme_io": true, 00:25:45.107 "nvme_io_md": false, 00:25:45.107 "write_zeroes": true, 00:25:45.107 "zcopy": false, 00:25:45.107 "get_zone_info": false, 00:25:45.107 "zone_management": false, 00:25:45.107 "zone_append": false, 00:25:45.107 "compare": true, 00:25:45.107 "compare_and_write": false, 00:25:45.107 "abort": true, 00:25:45.107 "seek_hole": false, 00:25:45.107 "seek_data": false, 00:25:45.107 
"copy": true, 00:25:45.107 "nvme_iov_md": false 00:25:45.107 }, 00:25:45.107 "driver_specific": { 00:25:45.107 "nvme": [ 00:25:45.107 { 00:25:45.107 "pci_address": "0000:00:11.0", 00:25:45.107 "trid": { 00:25:45.107 "trtype": "PCIe", 00:25:45.107 "traddr": "0000:00:11.0" 00:25:45.107 }, 00:25:45.107 "ctrlr_data": { 00:25:45.107 "cntlid": 0, 00:25:45.107 "vendor_id": "0x1b36", 00:25:45.107 "model_number": "QEMU NVMe Ctrl", 00:25:45.107 "serial_number": "12341", 00:25:45.107 "firmware_revision": "8.0.0", 00:25:45.107 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:45.107 "oacs": { 00:25:45.107 "security": 0, 00:25:45.107 "format": 1, 00:25:45.107 "firmware": 0, 00:25:45.107 "ns_manage": 1 00:25:45.107 }, 00:25:45.107 "multi_ctrlr": false, 00:25:45.107 "ana_reporting": false 00:25:45.107 }, 00:25:45.107 "vs": { 00:25:45.107 "nvme_version": "1.4" 00:25:45.107 }, 00:25:45.107 "ns_data": { 00:25:45.107 "id": 1, 00:25:45.107 "can_share": false 00:25:45.107 } 00:25:45.107 } 00:25:45.107 ], 00:25:45.107 "mp_policy": "active_passive" 00:25:45.107 } 00:25:45.107 } 00:25:45.107 ]' 00:25:45.107 07:57:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:45.107 07:57:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:45.107 07:57:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:45.107 07:57:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:25:45.107 07:57:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:25:45.107 07:57:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:25:45.107 07:57:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:25:45.107 07:57:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:25:45.107 07:57:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:25:45.107 07:57:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:45.107 07:57:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:45.367 07:57:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=aec35b25-6979-4ca1-a6c1-137abafd51c7 00:25:45.367 07:57:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:25:45.367 07:57:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u aec35b25-6979-4ca1-a6c1-137abafd51c7 00:25:45.628 07:57:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:25:45.888 07:57:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=a88a7448-de78-419a-9cbb-10532c28d868 00:25:45.888 07:57:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a88a7448-de78-419a-9cbb-10532c28d868 00:25:46.148 07:57:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=613ed6b3-acb1-4bbd-b21e-9ce21b4271d1 00:25:46.148 07:57:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:25:46.148 07:57:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 613ed6b3-acb1-4bbd-b21e-9ce21b4271d1 00:25:46.148 07:57:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:25:46.148 07:57:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:25:46.148 07:57:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=613ed6b3-acb1-4bbd-b21e-9ce21b4271d1 00:25:46.148 07:57:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:25:46.148 07:57:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 613ed6b3-acb1-4bbd-b21e-9ce21b4271d1 00:25:46.148 07:57:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=613ed6b3-acb1-4bbd-b21e-9ce21b4271d1 00:25:46.148 07:57:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:46.148 07:57:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:46.148 07:57:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:46.148 07:57:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 613ed6b3-acb1-4bbd-b21e-9ce21b4271d1 00:25:46.408 07:57:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:46.408 { 00:25:46.408 "name": "613ed6b3-acb1-4bbd-b21e-9ce21b4271d1", 00:25:46.408 "aliases": [ 00:25:46.408 "lvs/nvme0n1p0" 00:25:46.408 ], 00:25:46.408 "product_name": "Logical Volume", 00:25:46.408 "block_size": 4096, 00:25:46.408 "num_blocks": 26476544, 00:25:46.408 "uuid": "613ed6b3-acb1-4bbd-b21e-9ce21b4271d1", 00:25:46.408 "assigned_rate_limits": { 00:25:46.408 "rw_ios_per_sec": 0, 00:25:46.408 "rw_mbytes_per_sec": 0, 00:25:46.408 "r_mbytes_per_sec": 0, 00:25:46.408 "w_mbytes_per_sec": 0 00:25:46.408 }, 00:25:46.408 "claimed": false, 00:25:46.408 "zoned": false, 00:25:46.408 "supported_io_types": { 00:25:46.408 "read": true, 00:25:46.408 "write": true, 00:25:46.408 "unmap": true, 00:25:46.408 "flush": false, 00:25:46.408 "reset": true, 00:25:46.408 "nvme_admin": false, 00:25:46.408 "nvme_io": false, 00:25:46.408 "nvme_io_md": false, 00:25:46.408 "write_zeroes": true, 00:25:46.408 "zcopy": false, 00:25:46.408 "get_zone_info": false, 00:25:46.408 "zone_management": false, 00:25:46.408 "zone_append": false, 00:25:46.408 "compare": false, 00:25:46.408 "compare_and_write": false, 00:25:46.408 "abort": false, 00:25:46.408 "seek_hole": true, 00:25:46.408 "seek_data": true, 00:25:46.408 "copy": false, 00:25:46.408 "nvme_iov_md": false 00:25:46.408 }, 00:25:46.408 "driver_specific": { 00:25:46.408 "lvol": { 00:25:46.408 "lvol_store_uuid": "a88a7448-de78-419a-9cbb-10532c28d868", 00:25:46.408 "base_bdev": "nvme0n1", 00:25:46.408 "thin_provision": true, 00:25:46.408 "num_allocated_clusters": 0, 00:25:46.408 "snapshot": false, 00:25:46.408 "clone": false, 00:25:46.408 "esnap_clone": false 00:25:46.408 } 00:25:46.408 } 00:25:46.408 } 00:25:46.408 ]' 00:25:46.408 07:57:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:46.408 07:57:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:46.408 07:57:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:46.408 07:57:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:46.408 07:57:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:46.408 07:57:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:25:46.408 07:57:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:25:46.408 07:57:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:25:46.408 07:57:36 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:25:46.667 07:57:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:25:46.667 07:57:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:25:46.667 07:57:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 613ed6b3-acb1-4bbd-b21e-9ce21b4271d1 00:25:46.667 07:57:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=613ed6b3-acb1-4bbd-b21e-9ce21b4271d1 00:25:46.667 07:57:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:46.667 07:57:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:46.667 07:57:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:46.667 07:57:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 613ed6b3-acb1-4bbd-b21e-9ce21b4271d1 00:25:46.926 07:57:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:46.926 { 00:25:46.926 "name": "613ed6b3-acb1-4bbd-b21e-9ce21b4271d1", 00:25:46.926 "aliases": [ 00:25:46.926 "lvs/nvme0n1p0" 00:25:46.926 ], 00:25:46.926 "product_name": "Logical Volume", 00:25:46.926 "block_size": 4096, 00:25:46.926 "num_blocks": 26476544, 00:25:46.926 "uuid": "613ed6b3-acb1-4bbd-b21e-9ce21b4271d1", 00:25:46.926 "assigned_rate_limits": { 00:25:46.926 "rw_ios_per_sec": 0, 00:25:46.926 "rw_mbytes_per_sec": 0, 00:25:46.926 "r_mbytes_per_sec": 0, 00:25:46.926 "w_mbytes_per_sec": 0 00:25:46.926 }, 00:25:46.926 "claimed": false, 00:25:46.926 "zoned": false, 00:25:46.926 "supported_io_types": { 00:25:46.926 "read": true, 00:25:46.926 "write": true, 00:25:46.926 "unmap": true, 00:25:46.926 "flush": false, 00:25:46.926 "reset": true, 00:25:46.926 "nvme_admin": false, 00:25:46.926 "nvme_io": false, 00:25:46.926 "nvme_io_md": false, 00:25:46.926 "write_zeroes": true, 00:25:46.926 "zcopy": false, 00:25:46.926 "get_zone_info": false, 00:25:46.926 "zone_management": false, 00:25:46.926 "zone_append": false, 00:25:46.926 "compare": false, 00:25:46.926 "compare_and_write": false, 00:25:46.926 "abort": false, 00:25:46.926 "seek_hole": true, 00:25:46.926 "seek_data": true, 00:25:46.926 "copy": false, 00:25:46.926 "nvme_iov_md": false 00:25:46.926 }, 00:25:46.926 "driver_specific": { 00:25:46.926 "lvol": { 00:25:46.926 "lvol_store_uuid": "a88a7448-de78-419a-9cbb-10532c28d868", 00:25:46.926 "base_bdev": "nvme0n1", 00:25:46.926 "thin_provision": true, 00:25:46.926 "num_allocated_clusters": 0, 00:25:46.926 "snapshot": false, 00:25:46.926 "clone": false, 00:25:46.926 "esnap_clone": false 00:25:46.926 } 00:25:46.926 } 00:25:46.926 } 00:25:46.926 ]' 00:25:46.926 07:57:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:46.926 07:57:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:46.926 07:57:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:46.926 07:57:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:46.926 07:57:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:46.926 07:57:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:25:46.926 07:57:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:25:46.926 07:57:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:25:47.185 07:57:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:25:47.185 07:57:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 613ed6b3-acb1-4bbd-b21e-9ce21b4271d1 00:25:47.185 07:57:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=613ed6b3-acb1-4bbd-b21e-9ce21b4271d1 00:25:47.185 07:57:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:47.185 07:57:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:47.185 07:57:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:47.185 07:57:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 613ed6b3-acb1-4bbd-b21e-9ce21b4271d1 00:25:47.185 07:57:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:47.185 { 00:25:47.185 "name": "613ed6b3-acb1-4bbd-b21e-9ce21b4271d1", 00:25:47.185 "aliases": [ 00:25:47.185 "lvs/nvme0n1p0" 00:25:47.185 ], 00:25:47.185 "product_name": "Logical Volume", 00:25:47.185 "block_size": 4096, 00:25:47.185 "num_blocks": 26476544, 00:25:47.185 "uuid": "613ed6b3-acb1-4bbd-b21e-9ce21b4271d1", 00:25:47.185 "assigned_rate_limits": { 00:25:47.185 "rw_ios_per_sec": 0, 00:25:47.185 "rw_mbytes_per_sec": 0, 00:25:47.185 "r_mbytes_per_sec": 0, 00:25:47.185 "w_mbytes_per_sec": 0 00:25:47.185 }, 00:25:47.185 "claimed": false, 00:25:47.185 "zoned": false, 00:25:47.185 "supported_io_types": { 00:25:47.185 "read": true, 00:25:47.185 "write": true, 00:25:47.185 "unmap": true, 00:25:47.185 "flush": false, 00:25:47.185 "reset": true, 00:25:47.185 "nvme_admin": false, 00:25:47.185 "nvme_io": false, 00:25:47.185 "nvme_io_md": false, 00:25:47.185 "write_zeroes": true, 00:25:47.185 "zcopy": false, 00:25:47.185 "get_zone_info": false, 00:25:47.185 "zone_management": false, 00:25:47.185 "zone_append": false, 00:25:47.185 "compare": false, 00:25:47.185 "compare_and_write": false, 00:25:47.185 "abort": false, 00:25:47.185 "seek_hole": true, 00:25:47.185 "seek_data": true, 00:25:47.185 "copy": false, 00:25:47.185 "nvme_iov_md": false 00:25:47.185 }, 00:25:47.185 "driver_specific": { 00:25:47.185 "lvol": { 00:25:47.185 "lvol_store_uuid": "a88a7448-de78-419a-9cbb-10532c28d868", 00:25:47.185 "base_bdev": "nvme0n1", 00:25:47.185 "thin_provision": true, 00:25:47.185 "num_allocated_clusters": 0, 00:25:47.185 "snapshot": false, 00:25:47.185 "clone": false, 00:25:47.185 "esnap_clone": false 00:25:47.185 } 00:25:47.185 } 00:25:47.185 } 00:25:47.185 ]' 00:25:47.185 07:57:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:47.444 07:57:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:47.444 07:57:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:47.444 07:57:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:47.444 07:57:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:47.444 07:57:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:25:47.444 07:57:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:25:47.444 07:57:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 613ed6b3-acb1-4bbd-b21e-9ce21b4271d1 
--l2p_dram_limit 10' 00:25:47.444 07:57:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:25:47.444 07:57:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:25:47.444 07:57:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:25:47.444 07:57:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 613ed6b3-acb1-4bbd-b21e-9ce21b4271d1 --l2p_dram_limit 10 -c nvc0n1p0 00:25:47.445 [2024-11-29 07:57:37.363487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.445 [2024-11-29 07:57:37.363525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:47.445 [2024-11-29 07:57:37.363538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:47.445 [2024-11-29 07:57:37.363544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.445 [2024-11-29 07:57:37.363590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.445 [2024-11-29 07:57:37.363597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:47.445 [2024-11-29 07:57:37.363605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:25:47.445 [2024-11-29 07:57:37.363611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.445 [2024-11-29 07:57:37.363627] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:47.445 [2024-11-29 07:57:37.364159] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:47.445 [2024-11-29 07:57:37.364180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.445 [2024-11-29 07:57:37.364186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:47.445 [2024-11-29 07:57:37.364194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.555 ms 00:25:47.445 [2024-11-29 07:57:37.364200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.445 [2024-11-29 07:57:37.364252] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID b19a4395-9893-474f-acb4-ce9ded40bf2e 00:25:47.445 [2024-11-29 07:57:37.365200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.445 [2024-11-29 07:57:37.365225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:25:47.445 [2024-11-29 07:57:37.365233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:25:47.445 [2024-11-29 07:57:37.365242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.445 [2024-11-29 07:57:37.369796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.445 [2024-11-29 07:57:37.369826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:47.445 [2024-11-29 07:57:37.369834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.524 ms 00:25:47.445 [2024-11-29 07:57:37.369841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.445 [2024-11-29 07:57:37.369906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.445 [2024-11-29 07:57:37.369915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:47.445 [2024-11-29 07:57:37.369921] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:25:47.445 [2024-11-29 07:57:37.369931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.445 [2024-11-29 07:57:37.369963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.445 [2024-11-29 07:57:37.369972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:47.445 [2024-11-29 07:57:37.369980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:47.445 [2024-11-29 07:57:37.369987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.445 [2024-11-29 07:57:37.370004] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:47.445 [2024-11-29 07:57:37.372928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.445 [2024-11-29 07:57:37.372956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:47.445 [2024-11-29 07:57:37.372967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.927 ms 00:25:47.445 [2024-11-29 07:57:37.372973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.445 [2024-11-29 07:57:37.373000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.445 [2024-11-29 07:57:37.373006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:47.445 [2024-11-29 07:57:37.373014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:47.445 [2024-11-29 07:57:37.373020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.445 [2024-11-29 07:57:37.373033] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:25:47.445 [2024-11-29 07:57:37.373137] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:47.445 [2024-11-29 07:57:37.373149] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:47.445 [2024-11-29 07:57:37.373158] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:47.445 [2024-11-29 07:57:37.373167] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:47.445 [2024-11-29 07:57:37.373174] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:47.445 [2024-11-29 07:57:37.373181] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:47.445 [2024-11-29 07:57:37.373188] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:47.445 [2024-11-29 07:57:37.373195] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:47.445 [2024-11-29 07:57:37.373200] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:47.445 [2024-11-29 07:57:37.373207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.445 [2024-11-29 07:57:37.373217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:47.445 [2024-11-29 07:57:37.373225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.175 ms 00:25:47.445 [2024-11-29 07:57:37.373230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.445 [2024-11-29 07:57:37.373296] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.445 [2024-11-29 07:57:37.373302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:47.445 [2024-11-29 07:57:37.373310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:25:47.445 [2024-11-29 07:57:37.373315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.445 [2024-11-29 07:57:37.373393] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:47.445 [2024-11-29 07:57:37.373406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:47.445 [2024-11-29 07:57:37.373414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:47.445 [2024-11-29 07:57:37.373420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:47.445 [2024-11-29 07:57:37.373427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:47.445 [2024-11-29 07:57:37.373433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:47.445 [2024-11-29 07:57:37.373440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:47.445 [2024-11-29 07:57:37.373456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:47.445 [2024-11-29 07:57:37.373463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:47.445 [2024-11-29 07:57:37.373468] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:47.445 [2024-11-29 07:57:37.373474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:47.445 [2024-11-29 07:57:37.373479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:47.445 [2024-11-29 07:57:37.373486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:47.445 [2024-11-29 07:57:37.373491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:47.445 [2024-11-29 07:57:37.373499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:47.445 [2024-11-29 07:57:37.373504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:47.445 [2024-11-29 07:57:37.373512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:47.445 [2024-11-29 07:57:37.373517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:47.445 [2024-11-29 07:57:37.373525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:47.445 [2024-11-29 07:57:37.373530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:47.445 [2024-11-29 07:57:37.373537] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:47.445 [2024-11-29 07:57:37.373542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:47.445 [2024-11-29 07:57:37.373548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:47.445 [2024-11-29 07:57:37.373553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:47.445 [2024-11-29 07:57:37.373560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:47.445 [2024-11-29 07:57:37.373565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:47.445 [2024-11-29 07:57:37.373571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:47.445 [2024-11-29 07:57:37.373576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:47.445 [2024-11-29 07:57:37.373583] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:47.445 [2024-11-29 07:57:37.373588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:47.445 [2024-11-29 07:57:37.373594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:47.445 [2024-11-29 07:57:37.373599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:47.445 [2024-11-29 07:57:37.373607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:47.445 [2024-11-29 07:57:37.373612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:47.445 [2024-11-29 07:57:37.373618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:47.445 [2024-11-29 07:57:37.373623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:47.445 [2024-11-29 07:57:37.373630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:47.445 [2024-11-29 07:57:37.373635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:47.445 [2024-11-29 07:57:37.373641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:47.445 [2024-11-29 07:57:37.373646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:47.445 [2024-11-29 07:57:37.373652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:47.445 [2024-11-29 07:57:37.373658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:47.445 [2024-11-29 07:57:37.373664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:47.445 [2024-11-29 07:57:37.373669] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:47.445 [2024-11-29 07:57:37.373676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:47.445 [2024-11-29 07:57:37.373682] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:47.445 [2024-11-29 07:57:37.373690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:47.446 [2024-11-29 07:57:37.373697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:47.446 [2024-11-29 07:57:37.373705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:47.446 [2024-11-29 07:57:37.373710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:47.446 [2024-11-29 07:57:37.373717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:47.446 [2024-11-29 07:57:37.373721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:47.446 [2024-11-29 07:57:37.373728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:47.446 [2024-11-29 07:57:37.373735] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:47.446 [2024-11-29 07:57:37.373745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:47.446 [2024-11-29 07:57:37.373752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:47.446 [2024-11-29 07:57:37.373759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:47.446 [2024-11-29 07:57:37.373764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:47.446 [2024-11-29 07:57:37.373771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:47.446 [2024-11-29 07:57:37.373776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:47.446 [2024-11-29 07:57:37.373782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:47.446 [2024-11-29 07:57:37.373788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:47.446 [2024-11-29 07:57:37.373794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:47.446 [2024-11-29 07:57:37.373800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:47.446 [2024-11-29 07:57:37.373808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:47.446 [2024-11-29 07:57:37.373814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:47.446 [2024-11-29 07:57:37.373820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:47.446 [2024-11-29 07:57:37.373826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:47.446 [2024-11-29 07:57:37.373833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:47.446 [2024-11-29 07:57:37.373838] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:47.446 [2024-11-29 07:57:37.373846] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:47.446 [2024-11-29 07:57:37.373852] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:47.446 [2024-11-29 07:57:37.373858] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:47.446 [2024-11-29 07:57:37.373864] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:47.446 [2024-11-29 07:57:37.373872] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:47.446 [2024-11-29 07:57:37.373878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.446 [2024-11-29 07:57:37.373885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:47.446 [2024-11-29 07:57:37.373891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.538 ms 00:25:47.446 [2024-11-29 07:57:37.373897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.446 [2024-11-29 07:57:37.373937] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:25:47.446 [2024-11-29 07:57:37.373948] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:25:51.648 [2024-11-29 07:57:40.898815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.648 [2024-11-29 07:57:40.898909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:25:51.648 [2024-11-29 07:57:40.898927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3524.862 ms 00:25:51.648 [2024-11-29 07:57:40.898940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.648 [2024-11-29 07:57:40.931075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.648 [2024-11-29 07:57:40.931145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:51.648 [2024-11-29 07:57:40.931161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.894 ms 00:25:51.648 [2024-11-29 07:57:40.931171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.648 [2024-11-29 07:57:40.931313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.648 [2024-11-29 07:57:40.931327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:51.648 [2024-11-29 07:57:40.931338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:25:51.648 [2024-11-29 07:57:40.931353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.648 [2024-11-29 07:57:40.966194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.648 [2024-11-29 07:57:40.966250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:51.648 [2024-11-29 07:57:40.966262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.804 ms 00:25:51.648 [2024-11-29 07:57:40.966272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.648 [2024-11-29 07:57:40.966312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.648 [2024-11-29 07:57:40.966322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:51.648 [2024-11-29 07:57:40.966332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:51.648 [2024-11-29 07:57:40.966349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.648 [2024-11-29 07:57:40.966916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.648 [2024-11-29 07:57:40.966959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:51.648 [2024-11-29 07:57:40.966970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.513 ms 00:25:51.648 [2024-11-29 07:57:40.966982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.648 [2024-11-29 07:57:40.967098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.648 [2024-11-29 07:57:40.967114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:51.648 [2024-11-29 07:57:40.967123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:25:51.648 [2024-11-29 07:57:40.967136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.648 [2024-11-29 07:57:40.984433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.648 [2024-11-29 07:57:40.984497] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:51.648 [2024-11-29 07:57:40.984509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.278 ms 00:25:51.648 [2024-11-29 07:57:40.984519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.648 [2024-11-29 07:57:41.010922] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:51.648 [2024-11-29 07:57:41.014937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.648 [2024-11-29 07:57:41.014986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:51.648 [2024-11-29 07:57:41.015004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.333 ms 00:25:51.648 [2024-11-29 07:57:41.015014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.648 [2024-11-29 07:57:41.109946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.648 [2024-11-29 07:57:41.110017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:25:51.648 [2024-11-29 07:57:41.110037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 94.878 ms 00:25:51.648 [2024-11-29 07:57:41.110047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.648 [2024-11-29 07:57:41.110269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.648 [2024-11-29 07:57:41.110283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:51.648 [2024-11-29 07:57:41.110298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.164 ms 00:25:51.648 [2024-11-29 07:57:41.110306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.648 [2024-11-29 07:57:41.136848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.648 [2024-11-29 07:57:41.136900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:25:51.648 [2024-11-29 07:57:41.136918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.481 ms 00:25:51.648 [2024-11-29 07:57:41.136926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.648 [2024-11-29 07:57:41.162005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.648 [2024-11-29 07:57:41.162054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:25:51.648 [2024-11-29 07:57:41.162070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.015 ms 00:25:51.648 [2024-11-29 07:57:41.162078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.648 [2024-11-29 07:57:41.162751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.648 [2024-11-29 07:57:41.162780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:51.648 [2024-11-29 07:57:41.162795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.623 ms 00:25:51.648 [2024-11-29 07:57:41.162803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.648 [2024-11-29 07:57:41.246198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.648 [2024-11-29 07:57:41.246253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:25:51.648 [2024-11-29 07:57:41.246274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.345 ms 00:25:51.648 [2024-11-29 07:57:41.246284] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.648 [2024-11-29 07:57:41.274331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.648 [2024-11-29 07:57:41.274380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:25:51.648 [2024-11-29 07:57:41.274396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.943 ms 00:25:51.648 [2024-11-29 07:57:41.274406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.648 [2024-11-29 07:57:41.300411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.648 [2024-11-29 07:57:41.300468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:25:51.648 [2024-11-29 07:57:41.300484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.932 ms 00:25:51.648 [2024-11-29 07:57:41.300492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.648 [2024-11-29 07:57:41.327078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.648 [2024-11-29 07:57:41.327132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:51.648 [2024-11-29 07:57:41.327149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.527 ms 00:25:51.648 [2024-11-29 07:57:41.327157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.648 [2024-11-29 07:57:41.327218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.648 [2024-11-29 07:57:41.327228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:51.648 [2024-11-29 07:57:41.327243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:51.648 [2024-11-29 07:57:41.327251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.648 [2024-11-29 07:57:41.327351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.648 [2024-11-29 07:57:41.327366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:51.648 [2024-11-29 07:57:41.327378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:25:51.648 [2024-11-29 07:57:41.327386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.648 [2024-11-29 07:57:41.328652] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3964.623 ms, result 0 00:25:51.648 { 00:25:51.648 "name": "ftl0", 00:25:51.648 "uuid": "b19a4395-9893-474f-acb4-ce9ded40bf2e" 00:25:51.648 } 00:25:51.648 07:57:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:25:51.648 07:57:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:51.648 07:57:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:25:51.648 07:57:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:25:51.648 07:57:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:25:51.909 /dev/nbd0 00:25:51.909 07:57:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:25:51.909 07:57:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:25:51.909 07:57:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:25:51.909 07:57:41 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:25:51.909 07:57:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:25:51.909 07:57:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:25:51.909 07:57:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:25:51.909 07:57:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:25:51.909 07:57:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:25:51.909 07:57:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:25:51.909 1+0 records in 00:25:51.909 1+0 records out 00:25:51.909 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000579968 s, 7.1 MB/s 00:25:51.909 07:57:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:25:51.909 07:57:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:25:51.909 07:57:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:25:51.909 07:57:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:25:51.909 07:57:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:25:51.909 07:57:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:25:52.170 [2024-11-29 07:57:41.888177] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:25:52.170 [2024-11-29 07:57:41.888312] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80386 ] 00:25:52.170 [2024-11-29 07:57:42.048079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:52.431 [2024-11-29 07:57:42.169353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:53.818  [2024-11-29T07:57:47.713Z] Copying: 1024/1024 [MB] (average 220 MBps) 00:25:57.769 00:25:57.769 07:57:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:59.765 07:57:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:25:59.765 [2024-11-29 07:57:49.658586] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization...
00:25:59.765 [2024-11-29 07:57:49.658687] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80473 ] 00:26:00.027 [2024-11-29 07:57:49.812511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:00.027 [2024-11-29 07:57:49.906605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:01.408  [2024-11-29T07:58:27.411Z] Copying: 1024/1024 [MB] (average 28 MBps) 00:26:37.467 00:26:37.467 07:58:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:26:37.467 07:58:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:26:37.467 07:58:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:26:37.728 [2024-11-29 07:58:27.535648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.728 [2024-11-29 07:58:27.535783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:37.728 [2024-11-29 07:58:27.535802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:37.728 [2024-11-29 07:58:27.535812]
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.729 [2024-11-29 07:58:27.535835] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:37.729 [2024-11-29 07:58:27.538055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.729 [2024-11-29 07:58:27.538084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:37.729 [2024-11-29 07:58:27.538095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.204 ms 00:26:37.729 [2024-11-29 07:58:27.538101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.729 [2024-11-29 07:58:27.540053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.729 [2024-11-29 07:58:27.540080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:37.729 [2024-11-29 07:58:27.540090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.929 ms 00:26:37.729 [2024-11-29 07:58:27.540097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.729 [2024-11-29 07:58:27.557157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.729 [2024-11-29 07:58:27.557191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:37.729 [2024-11-29 07:58:27.557203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.039 ms 00:26:37.729 [2024-11-29 07:58:27.557209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.729 [2024-11-29 07:58:27.561871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.729 [2024-11-29 07:58:27.561894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:37.729 [2024-11-29 07:58:27.561906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.631 ms 00:26:37.729 [2024-11-29 07:58:27.561912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.729 [2024-11-29 07:58:27.581461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.729 [2024-11-29 07:58:27.581580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:37.729 [2024-11-29 07:58:27.581597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.490 ms 00:26:37.729 [2024-11-29 07:58:27.581604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.729 [2024-11-29 07:58:27.594681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.729 [2024-11-29 07:58:27.594708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:37.729 [2024-11-29 07:58:27.594722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.047 ms 00:26:37.729 [2024-11-29 07:58:27.594729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.729 [2024-11-29 07:58:27.594845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.729 [2024-11-29 07:58:27.594854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:37.729 [2024-11-29 07:58:27.594872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:26:37.729 [2024-11-29 07:58:27.594879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.729 [2024-11-29 07:58:27.613525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.729 [2024-11-29 07:58:27.613549] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:37.729 [2024-11-29 07:58:27.613559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.630 ms 00:26:37.729 [2024-11-29 07:58:27.613566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.729 [2024-11-29 07:58:27.631967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.729 [2024-11-29 07:58:27.631991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:37.729 [2024-11-29 07:58:27.632000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.372 ms 00:26:37.729 [2024-11-29 07:58:27.632006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.729 [2024-11-29 07:58:27.649454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.729 [2024-11-29 07:58:27.649478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:37.729 [2024-11-29 07:58:27.649488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.416 ms 00:26:37.729 [2024-11-29 07:58:27.649494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.729 [2024-11-29 07:58:27.666963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.729 [2024-11-29 07:58:27.666986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:37.729 [2024-11-29 07:58:27.666995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.410 ms 00:26:37.729 [2024-11-29 07:58:27.667001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.729 [2024-11-29 07:58:27.667030] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:37.729 [2024-11-29 07:58:27.667041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:37.729 [2024-11-29 07:58:27.667051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:37.729 [2024-11-29 07:58:27.667057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:37.729 [2024-11-29 07:58:27.667065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:37.729 [2024-11-29 07:58:27.667071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:37.729 [2024-11-29 07:58:27.667078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:37.729 [2024-11-29 07:58:27.667084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:37.729 [2024-11-29 07:58:27.667094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:37.729 [2024-11-29 07:58:27.667100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:37.729 [2024-11-29 07:58:27.667108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:37.729 [2024-11-29 07:58:27.667114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:37.729 [2024-11-29 07:58:27.667121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:37.729 [2024-11-29 07:58:27.667126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:37.729 [2024-11-29 07:58:27.667133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:37.729 [2024-11-29 07:58:27.667139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:37.729 [2024-11-29 07:58:27.667146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:37.729 [2024-11-29 07:58:27.667152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:37.729 [2024-11-29 07:58:27.667160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:37.729 [2024-11-29 07:58:27.667166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:37.729 [2024-11-29 07:58:27.667173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:37.729 [2024-11-29 07:58:27.667179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:37.729 [2024-11-29 07:58:27.667189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:37.729 [2024-11-29 07:58:27.667195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:37.729 [2024-11-29 07:58:27.667203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:37.729 [2024-11-29 07:58:27.667209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:37.729 [2024-11-29 07:58:27.667216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:37.729 [2024-11-29 07:58:27.667223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:37.729 [2024-11-29 07:58:27.667230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:37.729 [2024-11-29 07:58:27.667236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:37.729 [2024-11-29 07:58:27.667245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:37.729 [2024-11-29 07:58:27.667252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:37.729 [2024-11-29 07:58:27.667259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:37.729 [2024-11-29 07:58:27.667265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:37.729 [2024-11-29 07:58:27.667272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:37.729 [2024-11-29 07:58:27.667278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:37.729 [2024-11-29 07:58:27.667286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:37.729 [2024-11-29 07:58:27.667292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:37.729 [2024-11-29 07:58:27.667299] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:37.729 [2024-11-29 07:58:27.667304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:37.729 [2024-11-29 07:58:27.667313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:37.729 [2024-11-29 07:58:27.667319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:37.729 [2024-11-29 07:58:27.667327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:37.729 [2024-11-29 07:58:27.667333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:37.729 [2024-11-29 07:58:27.667340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:37.729 [2024-11-29 07:58:27.667345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:37.729 [2024-11-29 07:58:27.667353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:37.729 [2024-11-29 07:58:27.667365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:37.729 [2024-11-29 07:58:27.667373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:37.729 [2024-11-29 07:58:27.667378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:37.729 [2024-11-29 07:58:27.667385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:37.729 [2024-11-29 07:58:27.667391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:37.729 [2024-11-29 07:58:27.667398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:37.730 [2024-11-29 07:58:27.667404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:37.730 [2024-11-29 07:58:27.667411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:37.730 [2024-11-29 07:58:27.667417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:37.730 [2024-11-29 07:58:27.667426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:37.730 [2024-11-29 07:58:27.667432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:37.730 [2024-11-29 07:58:27.667439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:37.730 [2024-11-29 07:58:27.667460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:37.730 [2024-11-29 07:58:27.667468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:37.730 [2024-11-29 07:58:27.667474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:37.730 [2024-11-29 07:58:27.667482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:37.730 [2024-11-29 
07:58:27.667488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:37.730 [2024-11-29 07:58:27.667496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:37.730 [2024-11-29 07:58:27.667502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:37.730 [2024-11-29 07:58:27.667509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:37.730 [2024-11-29 07:58:27.667515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:37.730 [2024-11-29 07:58:27.667523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:37.730 [2024-11-29 07:58:27.667528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:37.730 [2024-11-29 07:58:27.667536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:37.730 [2024-11-29 07:58:27.667541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:37.730 [2024-11-29 07:58:27.667552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:37.730 [2024-11-29 07:58:27.667557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:37.730 [2024-11-29 07:58:27.667564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:37.730 [2024-11-29 07:58:27.667570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:37.730 [2024-11-29 07:58:27.667577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:37.730 [2024-11-29 07:58:27.667583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:37.730 [2024-11-29 07:58:27.667591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:37.730 [2024-11-29 07:58:27.667597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:37.730 [2024-11-29 07:58:27.667604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:37.730 [2024-11-29 07:58:27.667610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:37.730 [2024-11-29 07:58:27.667619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:37.730 [2024-11-29 07:58:27.667645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:37.730 [2024-11-29 07:58:27.667653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:37.730 [2024-11-29 07:58:27.667660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:37.730 [2024-11-29 07:58:27.667668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:37.730 [2024-11-29 07:58:27.667673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:26:37.730 [2024-11-29 07:58:27.667682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:37.730 [2024-11-29 07:58:27.667688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:37.730 [2024-11-29 07:58:27.667695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:37.730 [2024-11-29 07:58:27.667701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:37.730 [2024-11-29 07:58:27.667708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:37.730 [2024-11-29 07:58:27.667714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:37.730 [2024-11-29 07:58:27.667722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:37.730 [2024-11-29 07:58:27.667728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:37.730 [2024-11-29 07:58:27.667735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:37.730 [2024-11-29 07:58:27.667741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:37.730 [2024-11-29 07:58:27.667749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:37.730 [2024-11-29 07:58:27.667755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:37.730 [2024-11-29 07:58:27.667763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:37.730 [2024-11-29 07:58:27.667775] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:37.730 [2024-11-29 07:58:27.667783] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b19a4395-9893-474f-acb4-ce9ded40bf2e 00:26:37.730 [2024-11-29 07:58:27.667789] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:37.730 [2024-11-29 07:58:27.667799] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:37.730 [2024-11-29 07:58:27.667814] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:37.730 [2024-11-29 07:58:27.667822] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:37.730 [2024-11-29 07:58:27.667828] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:37.730 [2024-11-29 07:58:27.667835] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:37.730 [2024-11-29 07:58:27.667841] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:37.730 [2024-11-29 07:58:27.667847] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:37.730 [2024-11-29 07:58:27.667852] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:37.730 [2024-11-29 07:58:27.667859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.730 [2024-11-29 07:58:27.667865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:37.730 [2024-11-29 07:58:27.667873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.830 ms 00:26:37.730 [2024-11-29 07:58:27.667879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:26:37.991 [2024-11-29 07:58:27.678008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.991 [2024-11-29 07:58:27.678031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:37.991 [2024-11-29 07:58:27.678041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.104 ms 00:26:37.991 [2024-11-29 07:58:27.678047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.991 [2024-11-29 07:58:27.678338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.991 [2024-11-29 07:58:27.678345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:37.991 [2024-11-29 07:58:27.678353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:26:37.991 [2024-11-29 07:58:27.678359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.991 [2024-11-29 07:58:27.713199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:37.991 [2024-11-29 07:58:27.713228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:37.991 [2024-11-29 07:58:27.713239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:37.991 [2024-11-29 07:58:27.713245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.991 [2024-11-29 07:58:27.713297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:37.991 [2024-11-29 07:58:27.713304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:37.991 [2024-11-29 07:58:27.713312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:37.991 [2024-11-29 07:58:27.713318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.991 [2024-11-29 07:58:27.713375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:37.991 [2024-11-29 07:58:27.713386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:37.992 [2024-11-29 07:58:27.713394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:37.992 [2024-11-29 07:58:27.713401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.992 [2024-11-29 07:58:27.713419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:37.992 [2024-11-29 07:58:27.713426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:37.992 [2024-11-29 07:58:27.713433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:37.992 [2024-11-29 07:58:27.713439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.992 [2024-11-29 07:58:27.775643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:37.992 [2024-11-29 07:58:27.775679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:37.992 [2024-11-29 07:58:27.775692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:37.992 [2024-11-29 07:58:27.775698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.992 [2024-11-29 07:58:27.827191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:37.992 [2024-11-29 07:58:27.827225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:37.992 [2024-11-29 07:58:27.827238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:37.992 
[2024-11-29 07:58:27.827244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.992 [2024-11-29 07:58:27.827356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:37.992 [2024-11-29 07:58:27.827364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:37.992 [2024-11-29 07:58:27.827376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:37.992 [2024-11-29 07:58:27.827383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.992 [2024-11-29 07:58:27.827424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:37.992 [2024-11-29 07:58:27.827432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:37.992 [2024-11-29 07:58:27.827440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:37.992 [2024-11-29 07:58:27.827460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.992 [2024-11-29 07:58:27.827540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:37.992 [2024-11-29 07:58:27.827548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:37.992 [2024-11-29 07:58:27.827557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:37.992 [2024-11-29 07:58:27.827565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.992 [2024-11-29 07:58:27.827595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:37.992 [2024-11-29 07:58:27.827604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:37.992 [2024-11-29 07:58:27.827612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:37.992 [2024-11-29 07:58:27.827618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.992 [2024-11-29 07:58:27.827657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:37.992 [2024-11-29 07:58:27.827664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:37.992 [2024-11-29 07:58:27.827673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:37.992 [2024-11-29 07:58:27.827681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.992 [2024-11-29 07:58:27.827727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:37.992 [2024-11-29 07:58:27.827735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:37.992 [2024-11-29 07:58:27.827743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:37.992 [2024-11-29 07:58:27.827749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.992 [2024-11-29 07:58:27.827872] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 292.185 ms, result 0 00:26:37.992 true 00:26:37.992 07:58:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 80244 00:26:37.992 07:58:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid80244 00:26:37.992 07:58:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:26:37.992 [2024-11-29 07:58:27.915726] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:26:37.992 [2024-11-29 07:58:27.915836] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80872 ] 00:26:38.253 [2024-11-29 07:58:28.070172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:38.253 [2024-11-29 07:58:28.162071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:39.637  [2024-11-29T07:58:33.356Z] Copying: 1024/1024 [MB] (average 254 MBps) 00:26:43.412 00:26:43.412 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 80244 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:26:43.412 07:58:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:43.412 [2024-11-29 07:58:33.185901] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:26:43.412 [2024-11-29 07:58:33.186048] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80926 ] 00:26:43.673 [2024-11-29 07:58:33.350607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:43.673 [2024-11-29 07:58:33.471372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:43.934 [2024-11-29 07:58:33.776293] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:43.934 [2024-11-29 07:58:33.776376] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:43.934 [2024-11-29 07:58:33.842359] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:26:43.934 [2024-11-29 07:58:33.842971] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:26:43.934 [2024-11-29 07:58:33.843532] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:26:44.510 [2024-11-29 07:58:34.221360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.510 [2024-11-29 07:58:34.221418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:44.510 [2024-11-29 07:58:34.221434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:44.510 [2024-11-29 07:58:34.221460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.510 [2024-11-29 07:58:34.221520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.510 [2024-11-29 07:58:34.221531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:44.510 [2024-11-29 07:58:34.221540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:26:44.510 [2024-11-29 07:58:34.221548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.510 [2024-11-29 07:58:34.221569] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:44.510
[2024-11-29 07:58:34.222325] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:44.510 [2024-11-29 07:58:34.222346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.510 [2024-11-29 07:58:34.222354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:44.510 [2024-11-29 07:58:34.222364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.782 ms 00:26:44.510 [2024-11-29 07:58:34.222372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.510 [2024-11-29 07:58:34.224263] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:44.510 [2024-11-29 07:58:34.240387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.510 [2024-11-29 07:58:34.240439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:44.510 [2024-11-29 07:58:34.240476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.127 ms 00:26:44.510 [2024-11-29 07:58:34.240487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.510 [2024-11-29 07:58:34.240581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.510 [2024-11-29 07:58:34.240592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:44.510 [2024-11-29 07:58:34.240602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:26:44.510 [2024-11-29 07:58:34.240609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.510 [2024-11-29 07:58:34.249355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.510 [2024-11-29 07:58:34.249400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:44.510 [2024-11-29 07:58:34.249412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.656 ms 00:26:44.510 [2024-11-29 07:58:34.249420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.510 [2024-11-29 07:58:34.249530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.510 [2024-11-29 07:58:34.249541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:44.510 [2024-11-29 07:58:34.249551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:26:44.510 [2024-11-29 07:58:34.249558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.510 [2024-11-29 07:58:34.249618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.510 [2024-11-29 07:58:34.249629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:44.510 [2024-11-29 07:58:34.249637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:26:44.510 [2024-11-29 07:58:34.249645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.510 [2024-11-29 07:58:34.249672] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:44.510 [2024-11-29 07:58:34.253731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.510 [2024-11-29 07:58:34.253764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:44.510 [2024-11-29 07:58:34.253774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.067 ms 00:26:44.510 [2024-11-29 07:58:34.253783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:26:44.510 [2024-11-29 07:58:34.253820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.510 [2024-11-29 07:58:34.253829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:44.510 [2024-11-29 07:58:34.253838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:26:44.510 [2024-11-29 07:58:34.253846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.510 [2024-11-29 07:58:34.253906] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:44.510 [2024-11-29 07:58:34.253930] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:44.510 [2024-11-29 07:58:34.253968] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:44.510 [2024-11-29 07:58:34.253986] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:44.510 [2024-11-29 07:58:34.254092] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:44.510 [2024-11-29 07:58:34.254103] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:44.510 [2024-11-29 07:58:34.254114] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:44.510 [2024-11-29 07:58:34.254129] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:44.510 [2024-11-29 07:58:34.254140] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:44.510 [2024-11-29 07:58:34.254148] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:44.510 [2024-11-29 07:58:34.254156] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:44.510 [2024-11-29 07:58:34.254164] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:44.510 [2024-11-29 07:58:34.254171] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:44.510 [2024-11-29 07:58:34.254179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.510 [2024-11-29 07:58:34.254187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:44.510 [2024-11-29 07:58:34.254195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:26:44.510 [2024-11-29 07:58:34.254202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.510 [2024-11-29 07:58:34.254286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.511 [2024-11-29 07:58:34.254306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:44.511 [2024-11-29 07:58:34.254314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:26:44.511 [2024-11-29 07:58:34.254322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.511 [2024-11-29 07:58:34.254428] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:44.511 [2024-11-29 07:58:34.254439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:44.511 [2024-11-29 07:58:34.254464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:44.511 [2024-11-29 07:58:34.254473] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:44.511 [2024-11-29 07:58:34.254482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:44.511 [2024-11-29 07:58:34.254489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:44.511 [2024-11-29 07:58:34.254497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:44.511 [2024-11-29 07:58:34.254507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:44.511 [2024-11-29 07:58:34.254515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:44.511 [2024-11-29 07:58:34.254529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:44.511 [2024-11-29 07:58:34.254536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:44.511 [2024-11-29 07:58:34.254544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:44.511 [2024-11-29 07:58:34.254551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:44.511 [2024-11-29 07:58:34.254559] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:44.511 [2024-11-29 07:58:34.254566] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:44.511 [2024-11-29 07:58:34.254573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:44.511 [2024-11-29 07:58:34.254580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:44.511 [2024-11-29 07:58:34.254588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:44.511 [2024-11-29 07:58:34.254594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:44.511 [2024-11-29 07:58:34.254602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:44.511 [2024-11-29 07:58:34.254609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:44.511 [2024-11-29 07:58:34.254616] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:44.511 [2024-11-29 07:58:34.254623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:44.511 [2024-11-29 07:58:34.254630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:44.511 [2024-11-29 07:58:34.254637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:44.511 [2024-11-29 07:58:34.254644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:44.511 [2024-11-29 07:58:34.254651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:44.511 [2024-11-29 07:58:34.254657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:44.511 [2024-11-29 07:58:34.254664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:44.511 [2024-11-29 07:58:34.254671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:44.511 [2024-11-29 07:58:34.254678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:44.511 [2024-11-29 07:58:34.254685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:44.511 [2024-11-29 07:58:34.254691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:44.511 [2024-11-29 07:58:34.254700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:44.511 [2024-11-29 07:58:34.254707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:44.511 
[2024-11-29 07:58:34.254713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:44.511 [2024-11-29 07:58:34.254721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:44.511 [2024-11-29 07:58:34.254727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:44.511 [2024-11-29 07:58:34.254734] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:44.511 [2024-11-29 07:58:34.254742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:44.511 [2024-11-29 07:58:34.254750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:44.511 [2024-11-29 07:58:34.254757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:44.511 [2024-11-29 07:58:34.254763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:44.511 [2024-11-29 07:58:34.254770] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:44.511 [2024-11-29 07:58:34.254779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:44.511 [2024-11-29 07:58:34.254789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:44.511 [2024-11-29 07:58:34.254797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:44.511 [2024-11-29 07:58:34.254805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:44.511 [2024-11-29 07:58:34.254812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:44.511 [2024-11-29 07:58:34.254819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:44.511 [2024-11-29 07:58:34.254825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:44.511 [2024-11-29 07:58:34.254832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:44.511 [2024-11-29 07:58:34.254839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:44.511 [2024-11-29 07:58:34.254849] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:44.511 [2024-11-29 07:58:34.254858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:44.511 [2024-11-29 07:58:34.254866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:44.511 [2024-11-29 07:58:34.254873] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:44.511 [2024-11-29 07:58:34.254880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:44.511 [2024-11-29 07:58:34.254888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:44.511 [2024-11-29 07:58:34.254896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:44.511 [2024-11-29 07:58:34.254904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:44.511 [2024-11-29 07:58:34.254912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 
blk_sz:0x800 00:26:44.511 [2024-11-29 07:58:34.254919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:44.511 [2024-11-29 07:58:34.254926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:44.511 [2024-11-29 07:58:34.254935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:44.511 [2024-11-29 07:58:34.254943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:44.511 [2024-11-29 07:58:34.254951] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:44.511 [2024-11-29 07:58:34.254958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:44.511 [2024-11-29 07:58:34.254966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:44.511 [2024-11-29 07:58:34.254973] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:44.511 [2024-11-29 07:58:34.254981] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:44.511 [2024-11-29 07:58:34.254990] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:44.511 [2024-11-29 07:58:34.254998] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:44.511 [2024-11-29 07:58:34.255006] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:44.511 [2024-11-29 07:58:34.255014] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:44.511 [2024-11-29 07:58:34.255021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.511 [2024-11-29 07:58:34.255029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:44.511 [2024-11-29 07:58:34.255037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.662 ms 00:26:44.511 [2024-11-29 07:58:34.255045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.511 [2024-11-29 07:58:34.287609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.511 [2024-11-29 07:58:34.287659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:44.511 [2024-11-29 07:58:34.287673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.516 ms 00:26:44.511 [2024-11-29 07:58:34.287682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.511 [2024-11-29 07:58:34.287787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.511 [2024-11-29 07:58:34.287797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:44.511 [2024-11-29 07:58:34.287807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:26:44.511 [2024-11-29 
07:58:34.287815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.511 [2024-11-29 07:58:34.340969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.511 [2024-11-29 07:58:34.341029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:44.511 [2024-11-29 07:58:34.341048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.080 ms 00:26:44.511 [2024-11-29 07:58:34.341057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.511 [2024-11-29 07:58:34.341128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.511 [2024-11-29 07:58:34.341139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:44.511 [2024-11-29 07:58:34.341148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:44.511 [2024-11-29 07:58:34.341156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.511 [2024-11-29 07:58:34.341841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.512 [2024-11-29 07:58:34.341877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:44.512 [2024-11-29 07:58:34.341888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.586 ms 00:26:44.512 [2024-11-29 07:58:34.341905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.512 [2024-11-29 07:58:34.342073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.512 [2024-11-29 07:58:34.342092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:44.512 [2024-11-29 07:58:34.342103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:26:44.512 [2024-11-29 07:58:34.342111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.512 [2024-11-29 07:58:34.358967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.512 [2024-11-29 07:58:34.359028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:44.512 [2024-11-29 07:58:34.359047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.833 ms 00:26:44.512 [2024-11-29 07:58:34.359062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.512 [2024-11-29 07:58:34.377909] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:44.512 [2024-11-29 07:58:34.377950] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:44.512 [2024-11-29 07:58:34.377968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.512 [2024-11-29 07:58:34.377982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:44.512 [2024-11-29 07:58:34.377996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.713 ms 00:26:44.512 [2024-11-29 07:58:34.378008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.512 [2024-11-29 07:58:34.409808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.512 [2024-11-29 07:58:34.409850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:44.512 [2024-11-29 07:58:34.409861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.743 ms 00:26:44.512 [2024-11-29 07:58:34.409869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:26:44.512 [2024-11-29 07:58:34.421435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.512 [2024-11-29 07:58:34.421474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:44.512 [2024-11-29 07:58:34.421485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.514 ms 00:26:44.512 [2024-11-29 07:58:34.421492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.512 [2024-11-29 07:58:34.436701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.512 [2024-11-29 07:58:34.436764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:44.512 [2024-11-29 07:58:34.436780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.171 ms 00:26:44.512 [2024-11-29 07:58:34.436791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.512 [2024-11-29 07:58:34.437780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.512 [2024-11-29 07:58:34.437820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:44.512 [2024-11-29 07:58:34.437834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.863 ms 00:26:44.512 [2024-11-29 07:58:34.437845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.774 [2024-11-29 07:58:34.510309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.775 [2024-11-29 07:58:34.510362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:44.775 [2024-11-29 07:58:34.510376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.435 ms 00:26:44.775 [2024-11-29 07:58:34.510384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.775 [2024-11-29 07:58:34.521074] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:44.775 [2024-11-29 07:58:34.523747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.775 [2024-11-29 07:58:34.523780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:44.775 [2024-11-29 07:58:34.523795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.289 ms 00:26:44.775 [2024-11-29 07:58:34.523808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.775 [2024-11-29 07:58:34.523904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.775 [2024-11-29 07:58:34.523916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:44.775 [2024-11-29 07:58:34.523925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:26:44.775 [2024-11-29 07:58:34.523932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.775 [2024-11-29 07:58:34.523998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.775 [2024-11-29 07:58:34.524008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:44.775 [2024-11-29 07:58:34.524016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:26:44.775 [2024-11-29 07:58:34.524024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.775 [2024-11-29 07:58:34.524045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.775 [2024-11-29 07:58:34.524054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 
00:26:44.775 [2024-11-29 07:58:34.524062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:44.775 [2024-11-29 07:58:34.524070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.775 [2024-11-29 07:58:34.524101] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:44.775 [2024-11-29 07:58:34.524111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.775 [2024-11-29 07:58:34.524119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:44.775 [2024-11-29 07:58:34.524127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:44.775 [2024-11-29 07:58:34.524137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.775 [2024-11-29 07:58:34.548482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.775 [2024-11-29 07:58:34.548518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:44.775 [2024-11-29 07:58:34.548530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.326 ms 00:26:44.775 [2024-11-29 07:58:34.548538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.775 [2024-11-29 07:58:34.548617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.775 [2024-11-29 07:58:34.548627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:44.775 [2024-11-29 07:58:34.548636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:26:44.775 [2024-11-29 07:58:34.548644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.775 [2024-11-29 07:58:34.549632] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 327.835 ms, result 0 00:26:45.730  [2024-11-29T07:58:36.615Z] Copying: 11/1024 [MB] (11 MBps) [2024-11-29T07:58:38.002Z] Copying: 22/1024 [MB] (10 MBps) [2024-11-29T07:58:38.577Z] Copying: 32/1024 [MB] (10 MBps) [2024-11-29T07:58:39.966Z] Copying: 43/1024 [MB] (10 MBps) [2024-11-29T07:58:40.912Z] Copying: 53/1024 [MB] (10 MBps) [2024-11-29T07:58:41.858Z] Copying: 64736/1048576 [kB] (10176 kBps) [2024-11-29T07:58:42.801Z] Copying: 74888/1048576 [kB] (10152 kBps) [2024-11-29T07:58:43.747Z] Copying: 85124/1048576 [kB] (10236 kBps) [2024-11-29T07:58:44.693Z] Copying: 93/1024 [MB] (10 MBps) [2024-11-29T07:58:45.652Z] Copying: 105624/1048576 [kB] (10128 kBps) [2024-11-29T07:58:46.596Z] Copying: 113/1024 [MB] (10 MBps) [2024-11-29T07:58:47.980Z] Copying: 124/1024 [MB] (10 MBps) [2024-11-29T07:58:48.922Z] Copying: 134/1024 [MB] (10 MBps) [2024-11-29T07:58:49.867Z] Copying: 147/1024 [MB] (12 MBps) [2024-11-29T07:58:50.810Z] Copying: 158/1024 [MB] (11 MBps) [2024-11-29T07:58:51.846Z] Copying: 171824/1048576 [kB] (9952 kBps) [2024-11-29T07:58:52.791Z] Copying: 181728/1048576 [kB] (9904 kBps) [2024-11-29T07:58:53.736Z] Copying: 191824/1048576 [kB] (10096 kBps) [2024-11-29T07:58:54.681Z] Copying: 201440/1048576 [kB] (9616 kBps) [2024-11-29T07:58:55.620Z] Copying: 211560/1048576 [kB] (10120 kBps) [2024-11-29T07:58:56.563Z] Copying: 217/1024 [MB] (10 MBps) [2024-11-29T07:58:57.952Z] Copying: 227/1024 [MB] (10 MBps) [2024-11-29T07:58:58.900Z] Copying: 237/1024 [MB] (10 MBps) [2024-11-29T07:58:59.842Z] Copying: 248/1024 [MB] (10 MBps) [2024-11-29T07:59:00.785Z] Copying: 258/1024 [MB] (10 MBps) [2024-11-29T07:59:01.732Z] Copying: 276/1024 [MB] (17 MBps) 
[2024-11-29T07:59:02.674Z] Copying: 290/1024 [MB] (14 MBps) [2024-11-29T07:59:03.615Z] Copying: 302/1024 [MB] (12 MBps) [2024-11-29T07:59:05.000Z] Copying: 319/1024 [MB] (16 MBps) [2024-11-29T07:59:05.575Z] Copying: 332/1024 [MB] (13 MBps) [2024-11-29T07:59:06.961Z] Copying: 343/1024 [MB] (11 MBps) [2024-11-29T07:59:07.906Z] Copying: 361/1024 [MB] (17 MBps) [2024-11-29T07:59:08.852Z] Copying: 371/1024 [MB] (10 MBps) [2024-11-29T07:59:09.798Z] Copying: 386/1024 [MB] (15 MBps) [2024-11-29T07:59:10.742Z] Copying: 399/1024 [MB] (12 MBps) [2024-11-29T07:59:11.688Z] Copying: 416/1024 [MB] (17 MBps) [2024-11-29T07:59:12.632Z] Copying: 436/1024 [MB] (19 MBps) [2024-11-29T07:59:13.576Z] Copying: 454/1024 [MB] (18 MBps) [2024-11-29T07:59:14.954Z] Copying: 471/1024 [MB] (17 MBps) [2024-11-29T07:59:15.893Z] Copying: 492/1024 [MB] (20 MBps) [2024-11-29T07:59:16.833Z] Copying: 504/1024 [MB] (11 MBps) [2024-11-29T07:59:17.772Z] Copying: 516/1024 [MB] (12 MBps) [2024-11-29T07:59:18.716Z] Copying: 534/1024 [MB] (17 MBps) [2024-11-29T07:59:19.660Z] Copying: 549/1024 [MB] (15 MBps) [2024-11-29T07:59:20.605Z] Copying: 559/1024 [MB] (10 MBps) [2024-11-29T07:59:22.012Z] Copying: 571/1024 [MB] (11 MBps) [2024-11-29T07:59:22.585Z] Copying: 595496/1048576 [kB] (10164 kBps) [2024-11-29T07:59:23.595Z] Copying: 591/1024 [MB] (10 MBps) [2024-11-29T07:59:24.996Z] Copying: 601/1024 [MB] (10 MBps) [2024-11-29T07:59:25.567Z] Copying: 612/1024 [MB] (10 MBps) [2024-11-29T07:59:26.951Z] Copying: 622/1024 [MB] (10 MBps) [2024-11-29T07:59:27.897Z] Copying: 647/1024 [MB] (24 MBps) [2024-11-29T07:59:28.842Z] Copying: 673/1024 [MB] (26 MBps) [2024-11-29T07:59:29.786Z] Copying: 695/1024 [MB] (21 MBps) [2024-11-29T07:59:30.730Z] Copying: 717/1024 [MB] (21 MBps) [2024-11-29T07:59:31.673Z] Copying: 732/1024 [MB] (15 MBps) [2024-11-29T07:59:32.620Z] Copying: 744/1024 [MB] (12 MBps) [2024-11-29T07:59:33.564Z] Copying: 756/1024 [MB] (11 MBps) [2024-11-29T07:59:34.954Z] Copying: 770/1024 [MB] (14 MBps) [2024-11-29T07:59:35.897Z] Copying: 799212/1048576 [kB] (10184 kBps) [2024-11-29T07:59:36.840Z] Copying: 790/1024 [MB] (10 MBps) [2024-11-29T07:59:37.784Z] Copying: 806/1024 [MB] (15 MBps) [2024-11-29T07:59:38.728Z] Copying: 819/1024 [MB] (13 MBps) [2024-11-29T07:59:39.673Z] Copying: 840/1024 [MB] (21 MBps) [2024-11-29T07:59:40.615Z] Copying: 858/1024 [MB] (17 MBps) [2024-11-29T07:59:42.005Z] Copying: 877/1024 [MB] (19 MBps) [2024-11-29T07:59:42.579Z] Copying: 894/1024 [MB] (16 MBps) [2024-11-29T07:59:43.966Z] Copying: 911/1024 [MB] (16 MBps) [2024-11-29T07:59:44.912Z] Copying: 928/1024 [MB] (17 MBps) [2024-11-29T07:59:45.854Z] Copying: 940/1024 [MB] (12 MBps) [2024-11-29T07:59:46.797Z] Copying: 953/1024 [MB] (12 MBps) [2024-11-29T07:59:47.742Z] Copying: 963/1024 [MB] (10 MBps) [2024-11-29T07:59:48.689Z] Copying: 973/1024 [MB] (10 MBps) [2024-11-29T07:59:49.632Z] Copying: 983/1024 [MB] (10 MBps) [2024-11-29T07:59:50.569Z] Copying: 1000/1024 [MB] (16 MBps) [2024-11-29T07:59:51.143Z] Copying: 1023/1024 [MB] (23 MBps) [2024-11-29T07:59:51.143Z] Copying: 1024/1024 [MB] (average 13 MBps)[2024-11-29 07:59:50.927002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.199 [2024-11-29 07:59:50.927083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:01.199 [2024-11-29 07:59:50.927101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:01.199 [2024-11-29 07:59:50.927111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.199 [2024-11-29 
07:59:50.928335] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:01.199 [2024-11-29 07:59:50.932740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.199 [2024-11-29 07:59:50.932782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:01.199 [2024-11-29 07:59:50.932794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.369 ms 00:28:01.199 [2024-11-29 07:59:50.932811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.199 [2024-11-29 07:59:50.946362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.199 [2024-11-29 07:59:50.946414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:01.199 [2024-11-29 07:59:50.946426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.300 ms 00:28:01.199 [2024-11-29 07:59:50.946435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.199 [2024-11-29 07:59:50.969955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.199 [2024-11-29 07:59:50.970007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:01.199 [2024-11-29 07:59:50.970019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.491 ms 00:28:01.199 [2024-11-29 07:59:50.970027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.199 [2024-11-29 07:59:50.976195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.199 [2024-11-29 07:59:50.976237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:01.199 [2024-11-29 07:59:50.976250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.125 ms 00:28:01.199 [2024-11-29 07:59:50.976258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.199 [2024-11-29 07:59:51.002785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.199 [2024-11-29 07:59:51.002838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:01.199 [2024-11-29 07:59:51.002852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.465 ms 00:28:01.199 [2024-11-29 07:59:51.002861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.199 [2024-11-29 07:59:51.018808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.199 [2024-11-29 07:59:51.018860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:01.199 [2024-11-29 07:59:51.018874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.900 ms 00:28:01.199 [2024-11-29 07:59:51.018883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.461 [2024-11-29 07:59:51.326145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.462 [2024-11-29 07:59:51.326208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:01.462 [2024-11-29 07:59:51.326230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 307.210 ms 00:28:01.462 [2024-11-29 07:59:51.326240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.462 [2024-11-29 07:59:51.352031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.462 [2024-11-29 07:59:51.352082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:01.462 [2024-11-29 
07:59:51.352095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.774 ms 00:28:01.462 [2024-11-29 07:59:51.352114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.462 [2024-11-29 07:59:51.377408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.462 [2024-11-29 07:59:51.377475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:01.462 [2024-11-29 07:59:51.377488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.251 ms 00:28:01.462 [2024-11-29 07:59:51.377495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.462 [2024-11-29 07:59:51.401886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.462 [2024-11-29 07:59:51.401934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:01.462 [2024-11-29 07:59:51.401947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.348 ms 00:28:01.462 [2024-11-29 07:59:51.401954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.725 [2024-11-29 07:59:51.426703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.725 [2024-11-29 07:59:51.426752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:01.725 [2024-11-29 07:59:51.426764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.681 ms 00:28:01.725 [2024-11-29 07:59:51.426771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.725 [2024-11-29 07:59:51.426814] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:01.725 [2024-11-29 07:59:51.426829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 111360 / 261120 wr_cnt: 1 state: open 00:28:01.725 [2024-11-29 07:59:51.426841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:01.725 [2024-11-29 07:59:51.426850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:01.725 [2024-11-29 07:59:51.426859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:01.725 [2024-11-29 07:59:51.426867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:01.725 [2024-11-29 07:59:51.426875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:01.725 [2024-11-29 07:59:51.426883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:01.725 [2024-11-29 07:59:51.426892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:01.725 [2024-11-29 07:59:51.426900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:01.725 [2024-11-29 07:59:51.426908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:01.725 [2024-11-29 07:59:51.426916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:01.725 [2024-11-29 07:59:51.426924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:01.725 [2024-11-29 07:59:51.426933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:01.725 [2024-11-29 
07:59:51.426940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:01.725 [2024-11-29 07:59:51.426948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:01.725 [2024-11-29 07:59:51.426956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:01.725 [2024-11-29 07:59:51.426965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:01.725 [2024-11-29 07:59:51.426974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:01.725 [2024-11-29 07:59:51.426982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:01.725 [2024-11-29 07:59:51.426989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:01.725 [2024-11-29 07:59:51.426996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:01.725 [2024-11-29 07:59:51.427004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:01.725 [2024-11-29 07:59:51.427011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:01.725 [2024-11-29 07:59:51.427018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:01.725 [2024-11-29 07:59:51.427025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:01.725 [2024-11-29 07:59:51.427035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:01.725 [2024-11-29 07:59:51.427043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:01.725 [2024-11-29 07:59:51.427050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:01.725 [2024-11-29 07:59:51.427058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:01.725 [2024-11-29 07:59:51.427066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:01.725 [2024-11-29 07:59:51.427073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:01.725 [2024-11-29 07:59:51.427081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:01.725 [2024-11-29 07:59:51.427088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:01.725 [2024-11-29 07:59:51.427096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:01.725 [2024-11-29 07:59:51.427103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:01.725 [2024-11-29 07:59:51.427111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:01.725 [2024-11-29 07:59:51.427118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:01.725 [2024-11-29 07:59:51.427126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 
00:28:01.725 [2024-11-29 07:59:51.427134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:01.725 [2024-11-29 07:59:51.427141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:01.725 [2024-11-29 07:59:51.427148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:01.725 [2024-11-29 07:59:51.427156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:01.725 [2024-11-29 07:59:51.427163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:01.725 [2024-11-29 07:59:51.427171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:01.725 [2024-11-29 07:59:51.427178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:01.725 [2024-11-29 07:59:51.427186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:01.725 [2024-11-29 07:59:51.427193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:01.725 [2024-11-29 07:59:51.427200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:01.725 [2024-11-29 07:59:51.427208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:01.725 [2024-11-29 07:59:51.427215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:01.725 [2024-11-29 07:59:51.427222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:01.725 [2024-11-29 07:59:51.427230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:01.725 [2024-11-29 07:59:51.427237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:01.725 [2024-11-29 07:59:51.427245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:01.725 [2024-11-29 07:59:51.427252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:01.725 [2024-11-29 07:59:51.427259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:01.725 [2024-11-29 07:59:51.427266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:01.725 [2024-11-29 07:59:51.427275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:01.725 [2024-11-29 07:59:51.427282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:01.726 [2024-11-29 07:59:51.427290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:01.726 [2024-11-29 07:59:51.427298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:01.726 [2024-11-29 07:59:51.427307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:01.726 [2024-11-29 07:59:51.427315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 
wr_cnt: 0 state: free 00:28:01.726 [2024-11-29 07:59:51.427323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:01.726 [2024-11-29 07:59:51.427330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:01.726 [2024-11-29 07:59:51.427337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:01.726 [2024-11-29 07:59:51.427345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:01.726 [2024-11-29 07:59:51.427352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:01.726 [2024-11-29 07:59:51.427359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:01.726 [2024-11-29 07:59:51.427367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:01.726 [2024-11-29 07:59:51.427374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:01.726 [2024-11-29 07:59:51.427382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:01.726 [2024-11-29 07:59:51.427389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:01.726 [2024-11-29 07:59:51.427397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:01.726 [2024-11-29 07:59:51.427405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:01.726 [2024-11-29 07:59:51.427412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:01.726 [2024-11-29 07:59:51.427419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:01.726 [2024-11-29 07:59:51.427427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:01.726 [2024-11-29 07:59:51.427434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:01.726 [2024-11-29 07:59:51.427456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:01.726 [2024-11-29 07:59:51.427464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:01.726 [2024-11-29 07:59:51.427472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:01.726 [2024-11-29 07:59:51.427480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:01.726 [2024-11-29 07:59:51.427488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:01.726 [2024-11-29 07:59:51.427496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:01.726 [2024-11-29 07:59:51.427504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:01.726 [2024-11-29 07:59:51.427512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:01.726 [2024-11-29 07:59:51.427519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:01.726 [2024-11-29 07:59:51.427527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:01.726 [2024-11-29 07:59:51.427536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:01.726 [2024-11-29 07:59:51.427544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:01.726 [2024-11-29 07:59:51.427552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:01.726 [2024-11-29 07:59:51.427560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:01.726 [2024-11-29 07:59:51.427569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:01.726 [2024-11-29 07:59:51.427577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:01.726 [2024-11-29 07:59:51.427585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:01.726 [2024-11-29 07:59:51.427594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:01.726 [2024-11-29 07:59:51.427603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:01.726 [2024-11-29 07:59:51.427611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:01.726 [2024-11-29 07:59:51.427620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:01.726 [2024-11-29 07:59:51.427636] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:01.726 [2024-11-29 07:59:51.427645] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b19a4395-9893-474f-acb4-ce9ded40bf2e 00:28:01.726 [2024-11-29 07:59:51.427666] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 111360 00:28:01.726 [2024-11-29 07:59:51.427674] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 112320 00:28:01.726 [2024-11-29 07:59:51.427681] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 111360 00:28:01.726 [2024-11-29 07:59:51.427691] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0086 00:28:01.726 [2024-11-29 07:59:51.427699] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:01.726 [2024-11-29 07:59:51.427707] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:01.726 [2024-11-29 07:59:51.427714] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:01.726 [2024-11-29 07:59:51.427721] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:01.726 [2024-11-29 07:59:51.427728] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:01.726 [2024-11-29 07:59:51.427736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.726 [2024-11-29 07:59:51.427744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:01.726 [2024-11-29 07:59:51.427753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.923 ms 00:28:01.726 [2024-11-29 07:59:51.427762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.726 [2024-11-29 07:59:51.441221] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.726 [2024-11-29 07:59:51.441262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:01.726 [2024-11-29 07:59:51.441274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.441 ms 00:28:01.726 [2024-11-29 07:59:51.441282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.726 [2024-11-29 07:59:51.441705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.726 [2024-11-29 07:59:51.441730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:01.726 [2024-11-29 07:59:51.441748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.389 ms 00:28:01.726 [2024-11-29 07:59:51.441756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.726 [2024-11-29 07:59:51.478016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:01.726 [2024-11-29 07:59:51.478068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:01.726 [2024-11-29 07:59:51.478079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:01.726 [2024-11-29 07:59:51.478088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.726 [2024-11-29 07:59:51.478154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:01.726 [2024-11-29 07:59:51.478163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:01.726 [2024-11-29 07:59:51.478178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:01.726 [2024-11-29 07:59:51.478186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.726 [2024-11-29 07:59:51.478266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:01.726 [2024-11-29 07:59:51.478278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:01.726 [2024-11-29 07:59:51.478288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:01.726 [2024-11-29 07:59:51.478295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.726 [2024-11-29 07:59:51.478310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:01.726 [2024-11-29 07:59:51.478319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:01.726 [2024-11-29 07:59:51.478327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:01.726 [2024-11-29 07:59:51.478335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.726 [2024-11-29 07:59:51.561248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:01.726 [2024-11-29 07:59:51.561307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:01.726 [2024-11-29 07:59:51.561321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:01.726 [2024-11-29 07:59:51.561330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.726 [2024-11-29 07:59:51.629128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:01.726 [2024-11-29 07:59:51.629185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:01.726 [2024-11-29 07:59:51.629199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:01.726 [2024-11-29 07:59:51.629214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:28:01.726 [2024-11-29 07:59:51.629273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:01.726 [2024-11-29 07:59:51.629283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:01.726 [2024-11-29 07:59:51.629292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:01.726 [2024-11-29 07:59:51.629301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.726 [2024-11-29 07:59:51.629359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:01.726 [2024-11-29 07:59:51.629370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:01.726 [2024-11-29 07:59:51.629379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:01.726 [2024-11-29 07:59:51.629388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.726 [2024-11-29 07:59:51.629705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:01.726 [2024-11-29 07:59:51.629727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:01.726 [2024-11-29 07:59:51.629737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:01.726 [2024-11-29 07:59:51.629745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.726 [2024-11-29 07:59:51.629779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:01.726 [2024-11-29 07:59:51.629788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:01.727 [2024-11-29 07:59:51.629797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:01.727 [2024-11-29 07:59:51.629805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.727 [2024-11-29 07:59:51.629850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:01.727 [2024-11-29 07:59:51.629860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:01.727 [2024-11-29 07:59:51.629870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:01.727 [2024-11-29 07:59:51.629877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.727 [2024-11-29 07:59:51.629928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:01.727 [2024-11-29 07:59:51.629945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:01.727 [2024-11-29 07:59:51.629954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:01.727 [2024-11-29 07:59:51.629963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.727 [2024-11-29 07:59:51.630103] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 705.865 ms, result 0 00:28:03.115 00:28:03.115 00:28:03.115 07:59:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:28:05.736 07:59:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:05.736 [2024-11-29 07:59:55.131686] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
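The counters dumped by ftl_debug.c during the clean-shutdown sequence above can be cross-checked against the spdk_dd invocation that precedes this startup banner. A minimal Python sketch of those checks, assuming a 4 KiB FTL block size (inferred from the base-device layout dump, where region type 0x9 spans 0x1900000 blocks == 102400.00 MiB); the variable names are illustrative, not SPDK identifiers:

# Cross-checks for the counters visible in this log; not part of the test itself.
BLOCK_SIZE = 4096                      # bytes; inferred from the layout dump above

# WAF = total writes / user writes, per the ftl_debug.c stats dump.
total_writes, user_writes = 112320, 111360
print(f"WAF = {total_writes / user_writes:.4f}")   # -> WAF = 1.0086 (matches the log)

# spdk_dd ran with --count=262144 blocks; at 4 KiB each that is the
# 1024 MiB reported by the "Copying: 1024/1024 [MB]" progress entries.
print(f"{262144 * BLOCK_SIZE // 2**20} MiB")       # -> 1024 MiB

# L2P table: 20971520 entries x 4 bytes ("L2P address size: 4") matches the
# 80.00 MiB "Region l2p" in the NV cache layout dump.
print(f"{20971520 * 4 // 2**20} MiB")              # -> 80 MiB

The same arithmetic accounts for the band dump: the user-write count (111360) equals the valid-LBA count of band 1, the only band opened during the run.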
00:28:05.736 [2024-11-29 07:59:55.131777] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81755 ] 00:28:05.736 [2024-11-29 07:59:55.287550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:05.736 [2024-11-29 07:59:55.393983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:05.999 [2024-11-29 07:59:55.689519] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:05.999 [2024-11-29 07:59:55.689605] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:05.999 [2024-11-29 07:59:55.851572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.999 [2024-11-29 07:59:55.851631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:05.999 [2024-11-29 07:59:55.851646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:05.999 [2024-11-29 07:59:55.851655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.999 [2024-11-29 07:59:55.851709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.999 [2024-11-29 07:59:55.851723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:05.999 [2024-11-29 07:59:55.851732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:28:05.999 [2024-11-29 07:59:55.851741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.999 [2024-11-29 07:59:55.851761] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:05.999 [2024-11-29 07:59:55.852441] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:05.999 [2024-11-29 07:59:55.852489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.999 [2024-11-29 07:59:55.852497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:05.999 [2024-11-29 07:59:55.852507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.733 ms 00:28:05.999 [2024-11-29 07:59:55.852514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.999 [2024-11-29 07:59:55.854186] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:05.999 [2024-11-29 07:59:55.868717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.999 [2024-11-29 07:59:55.868766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:05.999 [2024-11-29 07:59:55.868779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.534 ms 00:28:05.999 [2024-11-29 07:59:55.868788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.999 [2024-11-29 07:59:55.868866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.999 [2024-11-29 07:59:55.868899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:05.999 [2024-11-29 07:59:55.868908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:28:05.999 [2024-11-29 07:59:55.868916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.999 [2024-11-29 07:59:55.876856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:05.999 [2024-11-29 07:59:55.876908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:05.999 [2024-11-29 07:59:55.876919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.862 ms 00:28:05.999 [2024-11-29 07:59:55.876933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.999 [2024-11-29 07:59:55.877013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.999 [2024-11-29 07:59:55.877023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:05.999 [2024-11-29 07:59:55.877032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:28:05.999 [2024-11-29 07:59:55.877040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.999 [2024-11-29 07:59:55.877084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.999 [2024-11-29 07:59:55.877094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:05.999 [2024-11-29 07:59:55.877102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:05.999 [2024-11-29 07:59:55.877109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.999 [2024-11-29 07:59:55.877136] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:05.999 [2024-11-29 07:59:55.881301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.999 [2024-11-29 07:59:55.881338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:05.999 [2024-11-29 07:59:55.881352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.171 ms 00:28:05.999 [2024-11-29 07:59:55.881360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.999 [2024-11-29 07:59:55.881394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.999 [2024-11-29 07:59:55.881403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:05.999 [2024-11-29 07:59:55.881412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:28:05.999 [2024-11-29 07:59:55.881420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.999 [2024-11-29 07:59:55.881484] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:05.999 [2024-11-29 07:59:55.881508] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:05.999 [2024-11-29 07:59:55.881546] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:05.999 [2024-11-29 07:59:55.881565] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:05.999 [2024-11-29 07:59:55.881676] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:05.999 [2024-11-29 07:59:55.881687] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:05.999 [2024-11-29 07:59:55.881698] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:05.999 [2024-11-29 07:59:55.881710] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:05.999 [2024-11-29 07:59:55.881720] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:05.999 [2024-11-29 07:59:55.881728] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:05.999 [2024-11-29 07:59:55.881736] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:05.999 [2024-11-29 07:59:55.881747] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:05.999 [2024-11-29 07:59:55.881755] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:05.999 [2024-11-29 07:59:55.881763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.999 [2024-11-29 07:59:55.881771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:05.999 [2024-11-29 07:59:55.881780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:28:05.999 [2024-11-29 07:59:55.881787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.000 [2024-11-29 07:59:55.881874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.000 [2024-11-29 07:59:55.881883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:06.000 [2024-11-29 07:59:55.881890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:28:06.000 [2024-11-29 07:59:55.881897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.000 [2024-11-29 07:59:55.882005] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:06.000 [2024-11-29 07:59:55.882023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:06.000 [2024-11-29 07:59:55.882031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:06.000 [2024-11-29 07:59:55.882040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:06.000 [2024-11-29 07:59:55.882048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:06.000 [2024-11-29 07:59:55.882055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:06.000 [2024-11-29 07:59:55.882062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:06.000 [2024-11-29 07:59:55.882070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:06.000 [2024-11-29 07:59:55.882077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:06.000 [2024-11-29 07:59:55.882084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:06.000 [2024-11-29 07:59:55.882091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:06.000 [2024-11-29 07:59:55.882100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:06.000 [2024-11-29 07:59:55.882108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:06.000 [2024-11-29 07:59:55.882121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:06.000 [2024-11-29 07:59:55.882129] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:06.000 [2024-11-29 07:59:55.882136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:06.000 [2024-11-29 07:59:55.882142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:06.000 [2024-11-29 07:59:55.882149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:06.000 [2024-11-29 07:59:55.882157] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:06.000 [2024-11-29 07:59:55.882164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:06.000 [2024-11-29 07:59:55.882171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:06.000 [2024-11-29 07:59:55.882178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:06.000 [2024-11-29 07:59:55.882184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:06.000 [2024-11-29 07:59:55.882191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:06.000 [2024-11-29 07:59:55.882197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:06.000 [2024-11-29 07:59:55.882203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:06.000 [2024-11-29 07:59:55.882209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:06.000 [2024-11-29 07:59:55.882215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:06.000 [2024-11-29 07:59:55.882221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:06.000 [2024-11-29 07:59:55.882229] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:06.000 [2024-11-29 07:59:55.882235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:06.000 [2024-11-29 07:59:55.882241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:06.000 [2024-11-29 07:59:55.882248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:06.000 [2024-11-29 07:59:55.882254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:06.000 [2024-11-29 07:59:55.882260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:06.000 [2024-11-29 07:59:55.882266] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:06.000 [2024-11-29 07:59:55.882272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:06.000 [2024-11-29 07:59:55.882279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:06.000 [2024-11-29 07:59:55.882285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:06.000 [2024-11-29 07:59:55.882292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:06.000 [2024-11-29 07:59:55.882298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:06.000 [2024-11-29 07:59:55.882305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:06.000 [2024-11-29 07:59:55.882312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:06.000 [2024-11-29 07:59:55.882325] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:06.000 [2024-11-29 07:59:55.882334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:06.000 [2024-11-29 07:59:55.882343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:06.000 [2024-11-29 07:59:55.882350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:06.000 [2024-11-29 07:59:55.882358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:06.000 [2024-11-29 07:59:55.882366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:06.000 [2024-11-29 07:59:55.882373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:06.000 
[2024-11-29 07:59:55.882380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:06.000 [2024-11-29 07:59:55.882386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:06.000 [2024-11-29 07:59:55.882393] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:06.000 [2024-11-29 07:59:55.882401] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:06.000 [2024-11-29 07:59:55.882411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:06.000 [2024-11-29 07:59:55.882423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:06.000 [2024-11-29 07:59:55.882430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:06.000 [2024-11-29 07:59:55.882438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:06.000 [2024-11-29 07:59:55.882459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:06.000 [2024-11-29 07:59:55.882467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:06.000 [2024-11-29 07:59:55.882475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:06.000 [2024-11-29 07:59:55.882482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:06.000 [2024-11-29 07:59:55.882489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:06.000 [2024-11-29 07:59:55.882497] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:06.000 [2024-11-29 07:59:55.882504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:06.000 [2024-11-29 07:59:55.882512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:06.000 [2024-11-29 07:59:55.882519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:06.000 [2024-11-29 07:59:55.882526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:06.000 [2024-11-29 07:59:55.882534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:06.000 [2024-11-29 07:59:55.882541] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:06.000 [2024-11-29 07:59:55.882550] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:06.000 [2024-11-29 07:59:55.882558] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:06.000 [2024-11-29 07:59:55.882566] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:06.000 [2024-11-29 07:59:55.882573] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:06.000 [2024-11-29 07:59:55.882580] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:06.000 [2024-11-29 07:59:55.882595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.000 [2024-11-29 07:59:55.882604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:06.000 [2024-11-29 07:59:55.882612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.661 ms 00:28:06.000 [2024-11-29 07:59:55.882620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.000 [2024-11-29 07:59:55.914325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.000 [2024-11-29 07:59:55.914376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:06.000 [2024-11-29 07:59:55.914387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.661 ms 00:28:06.000 [2024-11-29 07:59:55.914399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.000 [2024-11-29 07:59:55.914501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.000 [2024-11-29 07:59:55.914512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:06.000 [2024-11-29 07:59:55.914521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:28:06.000 [2024-11-29 07:59:55.914529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.262 [2024-11-29 07:59:55.964215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.262 [2024-11-29 07:59:55.964269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:06.262 [2024-11-29 07:59:55.964282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.626 ms 00:28:06.262 [2024-11-29 07:59:55.964291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.262 [2024-11-29 07:59:55.964339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.262 [2024-11-29 07:59:55.964350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:06.262 [2024-11-29 07:59:55.964364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:06.262 [2024-11-29 07:59:55.964372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.262 [2024-11-29 07:59:55.965021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.262 [2024-11-29 07:59:55.965055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:06.262 [2024-11-29 07:59:55.965067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.558 ms 00:28:06.263 [2024-11-29 07:59:55.965075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.263 [2024-11-29 07:59:55.965229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.263 [2024-11-29 07:59:55.965239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:06.263 [2024-11-29 07:59:55.965255] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:28:06.263 [2024-11-29 07:59:55.965263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.263 [2024-11-29 07:59:55.980768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.263 [2024-11-29 07:59:55.980813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:06.263 [2024-11-29 07:59:55.980824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.485 ms 00:28:06.263 [2024-11-29 07:59:55.980833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.263 [2024-11-29 07:59:55.994842] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:28:06.263 [2024-11-29 07:59:55.994888] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:06.263 [2024-11-29 07:59:55.994902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.263 [2024-11-29 07:59:55.994910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:06.263 [2024-11-29 07:59:55.994919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.952 ms 00:28:06.263 [2024-11-29 07:59:55.994927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.263 [2024-11-29 07:59:56.020691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.263 [2024-11-29 07:59:56.020737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:06.263 [2024-11-29 07:59:56.020750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.712 ms 00:28:06.263 [2024-11-29 07:59:56.020758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.263 [2024-11-29 07:59:56.033593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.263 [2024-11-29 07:59:56.033639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:06.263 [2024-11-29 07:59:56.033649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.780 ms 00:28:06.263 [2024-11-29 07:59:56.033657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.263 [2024-11-29 07:59:56.046341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.263 [2024-11-29 07:59:56.046390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:06.263 [2024-11-29 07:59:56.046402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.545 ms 00:28:06.263 [2024-11-29 07:59:56.046410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.263 [2024-11-29 07:59:56.047064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.263 [2024-11-29 07:59:56.047095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:06.263 [2024-11-29 07:59:56.047109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.538 ms 00:28:06.263 [2024-11-29 07:59:56.047117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.263 [2024-11-29 07:59:56.111125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.263 [2024-11-29 07:59:56.111189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:06.263 [2024-11-29 07:59:56.111210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 63.989 ms 00:28:06.263 [2024-11-29 07:59:56.111220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.263 [2024-11-29 07:59:56.122146] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:06.263 [2024-11-29 07:59:56.125241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.263 [2024-11-29 07:59:56.125284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:06.263 [2024-11-29 07:59:56.125296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.963 ms 00:28:06.263 [2024-11-29 07:59:56.125304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.263 [2024-11-29 07:59:56.125389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.263 [2024-11-29 07:59:56.125401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:06.263 [2024-11-29 07:59:56.125413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:28:06.263 [2024-11-29 07:59:56.125422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.263 [2024-11-29 07:59:56.127195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.263 [2024-11-29 07:59:56.127239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:06.263 [2024-11-29 07:59:56.127249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.718 ms 00:28:06.263 [2024-11-29 07:59:56.127258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.263 [2024-11-29 07:59:56.127291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.263 [2024-11-29 07:59:56.127301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:06.263 [2024-11-29 07:59:56.127311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:06.263 [2024-11-29 07:59:56.127319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.263 [2024-11-29 07:59:56.127361] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:06.263 [2024-11-29 07:59:56.127372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.263 [2024-11-29 07:59:56.127380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:06.263 [2024-11-29 07:59:56.127389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:06.263 [2024-11-29 07:59:56.127397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.263 [2024-11-29 07:59:56.152941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.263 [2024-11-29 07:59:56.152992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:06.263 [2024-11-29 07:59:56.153010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.523 ms 00:28:06.263 [2024-11-29 07:59:56.153019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.263 [2024-11-29 07:59:56.153108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.263 [2024-11-29 07:59:56.153118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:06.263 [2024-11-29 07:59:56.153127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:28:06.263 [2024-11-29 07:59:56.153136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:28:06.263 [2024-11-29 07:59:56.154376] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 302.313 ms, result 0 00:28:07.651  [2024-11-29T07:59:58.536Z] Copying: 1000/1048576 [kB] (1000 kBps) [2024-11-29T07:59:59.477Z] Copying: 4728/1048576 [kB] (3728 kBps) [2024-11-29T08:00:00.417Z] Copying: 26/1024 [MB] (21 MBps) [2024-11-29T08:00:01.361Z] Copying: 54/1024 [MB] (27 MBps) [2024-11-29T08:00:02.747Z] Copying: 85/1024 [MB] (31 MBps) [2024-11-29T08:00:03.686Z] Copying: 116/1024 [MB] (30 MBps) [2024-11-29T08:00:04.629Z] Copying: 147/1024 [MB] (31 MBps) [2024-11-29T08:00:05.567Z] Copying: 180/1024 [MB] (32 MBps) [2024-11-29T08:00:06.506Z] Copying: 209/1024 [MB] (29 MBps) [2024-11-29T08:00:07.451Z] Copying: 240/1024 [MB] (30 MBps) [2024-11-29T08:00:08.397Z] Copying: 270/1024 [MB] (30 MBps) [2024-11-29T08:00:09.341Z] Copying: 298/1024 [MB] (27 MBps) [2024-11-29T08:00:10.728Z] Copying: 327/1024 [MB] (29 MBps) [2024-11-29T08:00:11.671Z] Copying: 354/1024 [MB] (27 MBps) [2024-11-29T08:00:12.616Z] Copying: 384/1024 [MB] (29 MBps) [2024-11-29T08:00:13.560Z] Copying: 412/1024 [MB] (28 MBps) [2024-11-29T08:00:14.506Z] Copying: 443/1024 [MB] (30 MBps) [2024-11-29T08:00:15.452Z] Copying: 470/1024 [MB] (26 MBps) [2024-11-29T08:00:16.398Z] Copying: 485/1024 [MB] (15 MBps) [2024-11-29T08:00:17.779Z] Copying: 501/1024 [MB] (15 MBps) [2024-11-29T08:00:18.351Z] Copying: 542/1024 [MB] (41 MBps) [2024-11-29T08:00:19.740Z] Copying: 567/1024 [MB] (24 MBps) [2024-11-29T08:00:20.684Z] Copying: 584/1024 [MB] (17 MBps) [2024-11-29T08:00:21.630Z] Copying: 601/1024 [MB] (17 MBps) [2024-11-29T08:00:22.573Z] Copying: 619/1024 [MB] (18 MBps) [2024-11-29T08:00:23.518Z] Copying: 642/1024 [MB] (22 MBps) [2024-11-29T08:00:24.466Z] Copying: 667/1024 [MB] (24 MBps) [2024-11-29T08:00:25.403Z] Copying: 695/1024 [MB] (28 MBps) [2024-11-29T08:00:26.363Z] Copying: 738/1024 [MB] (42 MBps) [2024-11-29T08:00:27.398Z] Copying: 777/1024 [MB] (38 MBps) [2024-11-29T08:00:28.345Z] Copying: 806/1024 [MB] (29 MBps) [2024-11-29T08:00:29.763Z] Copying: 837/1024 [MB] (30 MBps) [2024-11-29T08:00:30.701Z] Copying: 866/1024 [MB] (29 MBps) [2024-11-29T08:00:31.645Z] Copying: 908/1024 [MB] (42 MBps) [2024-11-29T08:00:32.589Z] Copying: 936/1024 [MB] (27 MBps) [2024-11-29T08:00:33.534Z] Copying: 963/1024 [MB] (26 MBps) [2024-11-29T08:00:34.478Z] Copying: 993/1024 [MB] (30 MBps) [2024-11-29T08:00:34.478Z] Copying: 1023/1024 [MB] (29 MBps) [2024-11-29T08:00:34.478Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-11-29 08:00:34.463714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.534 [2024-11-29 08:00:34.463812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:44.534 [2024-11-29 08:00:34.463831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:44.534 [2024-11-29 08:00:34.463843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.534 [2024-11-29 08:00:34.463872] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:44.534 [2024-11-29 08:00:34.467909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.534 [2024-11-29 08:00:34.467959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:44.534 [2024-11-29 08:00:34.467974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.017 ms 00:28:44.534 [2024-11-29 08:00:34.467984] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.534 [2024-11-29 08:00:34.468288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.534 [2024-11-29 08:00:34.468307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:44.534 [2024-11-29 08:00:34.468320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:28:44.534 [2024-11-29 08:00:34.468330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.797 [2024-11-29 08:00:34.482910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.797 [2024-11-29 08:00:34.482965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:44.797 [2024-11-29 08:00:34.482978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.559 ms 00:28:44.797 [2024-11-29 08:00:34.482987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.797 [2024-11-29 08:00:34.489128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.797 [2024-11-29 08:00:34.489168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:44.797 [2024-11-29 08:00:34.489186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.106 ms 00:28:44.797 [2024-11-29 08:00:34.489195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.797 [2024-11-29 08:00:34.515256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.797 [2024-11-29 08:00:34.515304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:44.797 [2024-11-29 08:00:34.515316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.002 ms 00:28:44.797 [2024-11-29 08:00:34.515324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.797 [2024-11-29 08:00:34.530673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.797 [2024-11-29 08:00:34.530720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:44.797 [2024-11-29 08:00:34.530732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.302 ms 00:28:44.797 [2024-11-29 08:00:34.530740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.797 [2024-11-29 08:00:34.535566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.797 [2024-11-29 08:00:34.535611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:44.797 [2024-11-29 08:00:34.535622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.775 ms 00:28:44.797 [2024-11-29 08:00:34.535637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.797 [2024-11-29 08:00:34.561231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.797 [2024-11-29 08:00:34.561280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:44.797 [2024-11-29 08:00:34.561293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.577 ms 00:28:44.797 [2024-11-29 08:00:34.561300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.797 [2024-11-29 08:00:34.585905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.797 [2024-11-29 08:00:34.585952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:44.797 [2024-11-29 08:00:34.585963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.562 ms 
00:28:44.797 [2024-11-29 08:00:34.585971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.797 [2024-11-29 08:00:34.612373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.797 [2024-11-29 08:00:34.612426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:44.797 [2024-11-29 08:00:34.612440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.356 ms 00:28:44.797 [2024-11-29 08:00:34.612461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.797 [2024-11-29 08:00:34.637111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.797 [2024-11-29 08:00:34.637160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:44.797 [2024-11-29 08:00:34.637172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.557 ms 00:28:44.797 [2024-11-29 08:00:34.637180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.797 [2024-11-29 08:00:34.637224] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:44.797 [2024-11-29 08:00:34.637241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:44.797 [2024-11-29 08:00:34.637253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:28:44.797 [2024-11-29 08:00:34.637262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:44.797 [2024-11-29 08:00:34.637271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:44.797 [2024-11-29 08:00:34.637279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:44.797 [2024-11-29 08:00:34.637287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:44.797 [2024-11-29 08:00:34.637296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:44.797 [2024-11-29 08:00:34.637306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:44.797 [2024-11-29 08:00:34.637314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:44.797 [2024-11-29 08:00:34.637322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:44.797 [2024-11-29 08:00:34.637330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:44.797 [2024-11-29 08:00:34.637338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:44.797 [2024-11-29 08:00:34.637347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:44.797 [2024-11-29 08:00:34.637355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:44.797 [2024-11-29 08:00:34.637370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:44.797 [2024-11-29 08:00:34.637378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:44.797 [2024-11-29 08:00:34.637386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: 
free 00:28:44.797 [2024-11-29 08:00:34.637393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:44.797 [2024-11-29 08:00:34.637401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:44.797 [2024-11-29 08:00:34.637408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:44.797 [2024-11-29 08:00:34.637416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:44.797 [2024-11-29 08:00:34.637423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 
261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.637996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.638004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.638011] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.638019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.638027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.638035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.638045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.638053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.638061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.638069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.638077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:44.798 [2024-11-29 08:00:34.638094] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:44.798 [2024-11-29 08:00:34.638103] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b19a4395-9893-474f-acb4-ce9ded40bf2e 00:28:44.798 [2024-11-29 08:00:34.638111] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:28:44.798 [2024-11-29 08:00:34.638119] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 153280 00:28:44.799 [2024-11-29 08:00:34.638130] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 151296 00:28:44.799 [2024-11-29 08:00:34.638139] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0131 00:28:44.799 [2024-11-29 08:00:34.638147] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:44.799 [2024-11-29 08:00:34.638163] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:44.799 [2024-11-29 08:00:34.638171] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:44.799 [2024-11-29 08:00:34.638178] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:44.799 [2024-11-29 08:00:34.638185] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:44.799 [2024-11-29 08:00:34.638195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.799 [2024-11-29 08:00:34.638203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:44.799 [2024-11-29 08:00:34.638212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.971 ms 00:28:44.799 [2024-11-29 08:00:34.638220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.799 [2024-11-29 08:00:34.651802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.799 [2024-11-29 08:00:34.651842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:44.799 [2024-11-29 08:00:34.651854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.563 ms 00:28:44.799 [2024-11-29 08:00:34.651862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.799 [2024-11-29 08:00:34.652259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.799 [2024-11-29 08:00:34.652277] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:44.799 [2024-11-29 08:00:34.652287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.364 ms 00:28:44.799 [2024-11-29 08:00:34.652296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.799 [2024-11-29 08:00:34.688948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:44.799 [2024-11-29 08:00:34.689004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:44.799 [2024-11-29 08:00:34.689016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:44.799 [2024-11-29 08:00:34.689025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.799 [2024-11-29 08:00:34.689087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:44.799 [2024-11-29 08:00:34.689097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:44.799 [2024-11-29 08:00:34.689105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:44.799 [2024-11-29 08:00:34.689114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.799 [2024-11-29 08:00:34.689207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:44.799 [2024-11-29 08:00:34.689219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:44.799 [2024-11-29 08:00:34.689227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:44.799 [2024-11-29 08:00:34.689235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.799 [2024-11-29 08:00:34.689252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:44.799 [2024-11-29 08:00:34.689260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:44.799 [2024-11-29 08:00:34.689268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:44.799 [2024-11-29 08:00:34.689276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.060 [2024-11-29 08:00:34.773724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:45.060 [2024-11-29 08:00:34.773781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:45.061 [2024-11-29 08:00:34.773795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:45.061 [2024-11-29 08:00:34.773803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.061 [2024-11-29 08:00:34.842488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:45.061 [2024-11-29 08:00:34.842547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:45.061 [2024-11-29 08:00:34.842560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:45.061 [2024-11-29 08:00:34.842569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.061 [2024-11-29 08:00:34.842629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:45.061 [2024-11-29 08:00:34.842646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:45.061 [2024-11-29 08:00:34.842655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:45.061 [2024-11-29 08:00:34.842664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.061 [2024-11-29 08:00:34.842722] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:45.061 [2024-11-29 08:00:34.842733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:45.061 [2024-11-29 08:00:34.842742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:45.061 [2024-11-29 08:00:34.842751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.061 [2024-11-29 08:00:34.842850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:45.061 [2024-11-29 08:00:34.842861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:45.061 [2024-11-29 08:00:34.842873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:45.061 [2024-11-29 08:00:34.842882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.061 [2024-11-29 08:00:34.842918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:45.061 [2024-11-29 08:00:34.842927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:45.061 [2024-11-29 08:00:34.842936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:45.061 [2024-11-29 08:00:34.842944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.061 [2024-11-29 08:00:34.842984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:45.061 [2024-11-29 08:00:34.842995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:45.061 [2024-11-29 08:00:34.843006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:45.061 [2024-11-29 08:00:34.843014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.061 [2024-11-29 08:00:34.843060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:45.061 [2024-11-29 08:00:34.843070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:45.061 [2024-11-29 08:00:34.843078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:45.061 [2024-11-29 08:00:34.843087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.061 [2024-11-29 08:00:34.843220] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 379.493 ms, result 0 00:28:46.003 00:28:46.003 00:28:46.003 08:00:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:47.914 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:28:47.914 08:00:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:47.914 [2024-11-29 08:00:37.812796] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:28:47.914 [2024-11-29 08:00:37.812918] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82188 ] 00:28:48.173 [2024-11-29 08:00:37.970426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:48.174 [2024-11-29 08:00:38.045051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:48.432 [2024-11-29 08:00:38.256137] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:48.432 [2024-11-29 08:00:38.256187] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:48.691 [2024-11-29 08:00:38.407212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.691 [2024-11-29 08:00:38.407249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:48.691 [2024-11-29 08:00:38.407260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:48.691 [2024-11-29 08:00:38.407266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.691 [2024-11-29 08:00:38.407300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.691 [2024-11-29 08:00:38.407310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:48.691 [2024-11-29 08:00:38.407316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:28:48.691 [2024-11-29 08:00:38.407321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.691 [2024-11-29 08:00:38.407333] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:48.691 [2024-11-29 08:00:38.407841] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:48.691 [2024-11-29 08:00:38.407853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.691 [2024-11-29 08:00:38.407858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:48.691 [2024-11-29 08:00:38.407864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.522 ms 00:28:48.691 [2024-11-29 08:00:38.407870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.691 [2024-11-29 08:00:38.408782] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:48.691 [2024-11-29 08:00:38.419541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.691 [2024-11-29 08:00:38.419573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:48.691 [2024-11-29 08:00:38.419582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.759 ms 00:28:48.691 [2024-11-29 08:00:38.419589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.691 [2024-11-29 08:00:38.419637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.691 [2024-11-29 08:00:38.419645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:48.691 [2024-11-29 08:00:38.419652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:28:48.691 [2024-11-29 08:00:38.419658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.691 [2024-11-29 08:00:38.424121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:48.691 [2024-11-29 08:00:38.424148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:48.691 [2024-11-29 08:00:38.424156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.411 ms 00:28:48.691 [2024-11-29 08:00:38.424165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.691 [2024-11-29 08:00:38.424219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.691 [2024-11-29 08:00:38.424226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:48.691 [2024-11-29 08:00:38.424233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:28:48.691 [2024-11-29 08:00:38.424238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.691 [2024-11-29 08:00:38.424272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.691 [2024-11-29 08:00:38.424279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:48.691 [2024-11-29 08:00:38.424285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:48.691 [2024-11-29 08:00:38.424290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.691 [2024-11-29 08:00:38.424307] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:48.691 [2024-11-29 08:00:38.427061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.691 [2024-11-29 08:00:38.427086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:48.691 [2024-11-29 08:00:38.427095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.758 ms 00:28:48.691 [2024-11-29 08:00:38.427101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.691 [2024-11-29 08:00:38.427127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.691 [2024-11-29 08:00:38.427133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:48.691 [2024-11-29 08:00:38.427139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:48.691 [2024-11-29 08:00:38.427144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.691 [2024-11-29 08:00:38.427157] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:48.691 [2024-11-29 08:00:38.427171] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:48.691 [2024-11-29 08:00:38.427196] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:48.691 [2024-11-29 08:00:38.427209] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:48.691 [2024-11-29 08:00:38.427287] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:48.691 [2024-11-29 08:00:38.427295] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:48.691 [2024-11-29 08:00:38.427303] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:48.691 [2024-11-29 08:00:38.427310] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:48.691 [2024-11-29 08:00:38.427316] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:48.691 [2024-11-29 08:00:38.427322] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:48.691 [2024-11-29 08:00:38.427327] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:48.691 [2024-11-29 08:00:38.427335] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:48.691 [2024-11-29 08:00:38.427341] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:48.691 [2024-11-29 08:00:38.427347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.691 [2024-11-29 08:00:38.427352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:48.691 [2024-11-29 08:00:38.427358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms 00:28:48.691 [2024-11-29 08:00:38.427363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.691 [2024-11-29 08:00:38.427426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.691 [2024-11-29 08:00:38.427432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:48.691 [2024-11-29 08:00:38.427437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:28:48.691 [2024-11-29 08:00:38.427466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.691 [2024-11-29 08:00:38.427555] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:48.691 [2024-11-29 08:00:38.427564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:48.691 [2024-11-29 08:00:38.427570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:48.691 [2024-11-29 08:00:38.427576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:48.691 [2024-11-29 08:00:38.427582] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:48.691 [2024-11-29 08:00:38.427587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:48.691 [2024-11-29 08:00:38.427592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:48.691 [2024-11-29 08:00:38.427597] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:48.691 [2024-11-29 08:00:38.427602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:48.691 [2024-11-29 08:00:38.427608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:48.691 [2024-11-29 08:00:38.427612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:48.691 [2024-11-29 08:00:38.427617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:48.691 [2024-11-29 08:00:38.427624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:48.691 [2024-11-29 08:00:38.427633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:48.691 [2024-11-29 08:00:38.427638] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:48.691 [2024-11-29 08:00:38.427643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:48.691 [2024-11-29 08:00:38.427648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:48.691 [2024-11-29 08:00:38.427653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:48.691 [2024-11-29 08:00:38.427658] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:48.691 [2024-11-29 08:00:38.427662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:48.691 [2024-11-29 08:00:38.427667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:48.691 [2024-11-29 08:00:38.427672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:48.691 [2024-11-29 08:00:38.427677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:48.691 [2024-11-29 08:00:38.427682] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:48.691 [2024-11-29 08:00:38.427687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:48.692 [2024-11-29 08:00:38.427692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:48.692 [2024-11-29 08:00:38.427697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:48.692 [2024-11-29 08:00:38.427702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:48.692 [2024-11-29 08:00:38.427707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:48.692 [2024-11-29 08:00:38.427712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:48.692 [2024-11-29 08:00:38.427717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:48.692 [2024-11-29 08:00:38.427722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:48.692 [2024-11-29 08:00:38.427727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:48.692 [2024-11-29 08:00:38.427732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:48.692 [2024-11-29 08:00:38.427737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:48.692 [2024-11-29 08:00:38.427742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:48.692 [2024-11-29 08:00:38.427747] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:48.692 [2024-11-29 08:00:38.427752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:48.692 [2024-11-29 08:00:38.427757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:48.692 [2024-11-29 08:00:38.427762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:48.692 [2024-11-29 08:00:38.427767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:48.692 [2024-11-29 08:00:38.427772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:48.692 [2024-11-29 08:00:38.427776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:48.692 [2024-11-29 08:00:38.427781] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:48.692 [2024-11-29 08:00:38.427787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:48.692 [2024-11-29 08:00:38.427793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:48.692 [2024-11-29 08:00:38.427798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:48.692 [2024-11-29 08:00:38.427804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:48.692 [2024-11-29 08:00:38.427809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:48.692 [2024-11-29 08:00:38.427814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:48.692 
[2024-11-29 08:00:38.427820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:48.692 [2024-11-29 08:00:38.427825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:48.692 [2024-11-29 08:00:38.427829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:48.692 [2024-11-29 08:00:38.427835] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:48.692 [2024-11-29 08:00:38.427842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:48.692 [2024-11-29 08:00:38.427850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:48.692 [2024-11-29 08:00:38.427855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:48.692 [2024-11-29 08:00:38.427860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:48.692 [2024-11-29 08:00:38.427865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:48.692 [2024-11-29 08:00:38.427870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:48.692 [2024-11-29 08:00:38.427875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:48.692 [2024-11-29 08:00:38.427881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:48.692 [2024-11-29 08:00:38.427886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:48.692 [2024-11-29 08:00:38.427891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:48.692 [2024-11-29 08:00:38.427896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:48.692 [2024-11-29 08:00:38.427901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:48.692 [2024-11-29 08:00:38.427906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:48.692 [2024-11-29 08:00:38.427912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:48.692 [2024-11-29 08:00:38.427917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:48.692 [2024-11-29 08:00:38.427922] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:48.692 [2024-11-29 08:00:38.427928] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:48.692 [2024-11-29 08:00:38.427934] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:48.692 [2024-11-29 08:00:38.427939] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:48.692 [2024-11-29 08:00:38.427945] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:48.692 [2024-11-29 08:00:38.427950] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:48.692 [2024-11-29 08:00:38.427956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.692 [2024-11-29 08:00:38.427964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:48.692 [2024-11-29 08:00:38.427970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.453 ms 00:28:48.692 [2024-11-29 08:00:38.427975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.692 [2024-11-29 08:00:38.448734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.692 [2024-11-29 08:00:38.448761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:48.692 [2024-11-29 08:00:38.448769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.727 ms 00:28:48.692 [2024-11-29 08:00:38.448777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.692 [2024-11-29 08:00:38.448840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.692 [2024-11-29 08:00:38.448846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:48.692 [2024-11-29 08:00:38.448853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:28:48.692 [2024-11-29 08:00:38.448859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.692 [2024-11-29 08:00:38.486770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.692 [2024-11-29 08:00:38.486802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:48.692 [2024-11-29 08:00:38.486811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.873 ms 00:28:48.692 [2024-11-29 08:00:38.486817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.692 [2024-11-29 08:00:38.486846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.692 [2024-11-29 08:00:38.486854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:48.692 [2024-11-29 08:00:38.486863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:28:48.692 [2024-11-29 08:00:38.486869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.692 [2024-11-29 08:00:38.487175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.692 [2024-11-29 08:00:38.487194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:48.692 [2024-11-29 08:00:38.487200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:28:48.692 [2024-11-29 08:00:38.487206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.692 [2024-11-29 08:00:38.487302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.692 [2024-11-29 08:00:38.487309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:48.692 [2024-11-29 08:00:38.487315] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:28:48.692 [2024-11-29 08:00:38.487324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.692 [2024-11-29 08:00:38.497772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.692 [2024-11-29 08:00:38.497798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:48.692 [2024-11-29 08:00:38.497807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.433 ms 00:28:48.692 [2024-11-29 08:00:38.497812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.692 [2024-11-29 08:00:38.507632] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:48.692 [2024-11-29 08:00:38.507660] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:48.692 [2024-11-29 08:00:38.507669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.692 [2024-11-29 08:00:38.507675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:48.692 [2024-11-29 08:00:38.507681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.790 ms 00:28:48.692 [2024-11-29 08:00:38.507687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.692 [2024-11-29 08:00:38.525929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.692 [2024-11-29 08:00:38.525963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:48.692 [2024-11-29 08:00:38.525973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.212 ms 00:28:48.692 [2024-11-29 08:00:38.525978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.692 [2024-11-29 08:00:38.534719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.692 [2024-11-29 08:00:38.534745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:48.692 [2024-11-29 08:00:38.534752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.707 ms 00:28:48.692 [2024-11-29 08:00:38.534757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.692 [2024-11-29 08:00:38.543142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.692 [2024-11-29 08:00:38.543168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:48.692 [2024-11-29 08:00:38.543175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.361 ms 00:28:48.692 [2024-11-29 08:00:38.543180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.692 [2024-11-29 08:00:38.543638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.692 [2024-11-29 08:00:38.543659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:48.693 [2024-11-29 08:00:38.543668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.404 ms 00:28:48.693 [2024-11-29 08:00:38.543673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.693 [2024-11-29 08:00:38.586673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.693 [2024-11-29 08:00:38.586710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:48.693 [2024-11-29 08:00:38.586723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
42.986 ms 00:28:48.693 [2024-11-29 08:00:38.586729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.693 [2024-11-29 08:00:38.594524] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:48.693 [2024-11-29 08:00:38.596203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.693 [2024-11-29 08:00:38.596228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:48.693 [2024-11-29 08:00:38.596236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.442 ms 00:28:48.693 [2024-11-29 08:00:38.596242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.693 [2024-11-29 08:00:38.596295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.693 [2024-11-29 08:00:38.596305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:48.693 [2024-11-29 08:00:38.596314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:48.693 [2024-11-29 08:00:38.596319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.693 [2024-11-29 08:00:38.596785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.693 [2024-11-29 08:00:38.596807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:48.693 [2024-11-29 08:00:38.596815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.445 ms 00:28:48.693 [2024-11-29 08:00:38.596821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.693 [2024-11-29 08:00:38.596838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.693 [2024-11-29 08:00:38.596845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:48.693 [2024-11-29 08:00:38.596851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:48.693 [2024-11-29 08:00:38.596857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.693 [2024-11-29 08:00:38.596894] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:48.693 [2024-11-29 08:00:38.596902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.693 [2024-11-29 08:00:38.596908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:48.693 [2024-11-29 08:00:38.596914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:48.693 [2024-11-29 08:00:38.596919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.693 [2024-11-29 08:00:38.614644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.693 [2024-11-29 08:00:38.614671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:48.693 [2024-11-29 08:00:38.614683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.712 ms 00:28:48.693 [2024-11-29 08:00:38.614690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.693 [2024-11-29 08:00:38.614743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.693 [2024-11-29 08:00:38.614750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:48.693 [2024-11-29 08:00:38.614756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:28:48.693 [2024-11-29 08:00:38.614762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.693 
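Each management step in the startup trace above is emitted as a name/duration/status triple by trace_step(), which makes the slow steps easy to rank. A small sketch, assuming the trace was saved to a file named ftl.log in the one-record-per-line layout the application itself emits:

  # Pair each step name with the duration record that follows it and list the
  # slowest steps first ("Restore P2L checkpoints" dominates this startup).
  awk '/428:trace_step/ { n = $0; sub(/.*name: /, "", n) }
       /430:trace_step/ { d = $0; sub(/.*duration: /, "", d); print d "\t" n }' ftl.log |
      sort -rn | head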
[2024-11-29 08:00:38.615471] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 207.921 ms, result 0 00:28:50.077  [2024-11-29T08:00:40.964Z] Copying: 27/1024 [MB] (27 MBps) [2024-11-29T08:00:41.910Z] Copying: 43/1024 [MB] (16 MBps) [2024-11-29T08:00:42.855Z] Copying: 61/1024 [MB] (17 MBps) [2024-11-29T08:00:43.799Z] Copying: 80/1024 [MB] (19 MBps) [2024-11-29T08:00:45.183Z] Copying: 96/1024 [MB] (15 MBps) [2024-11-29T08:00:45.755Z] Copying: 108/1024 [MB] (12 MBps) [2024-11-29T08:00:47.142Z] Copying: 124/1024 [MB] (15 MBps) [2024-11-29T08:00:48.087Z] Copying: 136/1024 [MB] (11 MBps) [2024-11-29T08:00:49.031Z] Copying: 160/1024 [MB] (24 MBps) [2024-11-29T08:00:49.977Z] Copying: 179/1024 [MB] (18 MBps) [2024-11-29T08:00:50.923Z] Copying: 202/1024 [MB] (23 MBps) [2024-11-29T08:00:51.869Z] Copying: 216/1024 [MB] (14 MBps) [2024-11-29T08:00:52.810Z] Copying: 228/1024 [MB] (11 MBps) [2024-11-29T08:00:53.755Z] Copying: 247/1024 [MB] (18 MBps) [2024-11-29T08:00:54.816Z] Copying: 264/1024 [MB] (16 MBps) [2024-11-29T08:00:55.758Z] Copying: 279/1024 [MB] (15 MBps) [2024-11-29T08:00:57.146Z] Copying: 298/1024 [MB] (19 MBps) [2024-11-29T08:00:58.092Z] Copying: 318/1024 [MB] (20 MBps) [2024-11-29T08:00:59.036Z] Copying: 339/1024 [MB] (20 MBps) [2024-11-29T08:00:59.982Z] Copying: 359/1024 [MB] (19 MBps) [2024-11-29T08:01:00.928Z] Copying: 373/1024 [MB] (14 MBps) [2024-11-29T08:01:01.872Z] Copying: 391/1024 [MB] (17 MBps) [2024-11-29T08:01:02.816Z] Copying: 405/1024 [MB] (13 MBps) [2024-11-29T08:01:03.762Z] Copying: 415/1024 [MB] (10 MBps) [2024-11-29T08:01:05.152Z] Copying: 427/1024 [MB] (11 MBps) [2024-11-29T08:01:06.097Z] Copying: 449/1024 [MB] (22 MBps) [2024-11-29T08:01:07.043Z] Copying: 469/1024 [MB] (19 MBps) [2024-11-29T08:01:07.987Z] Copying: 484/1024 [MB] (15 MBps) [2024-11-29T08:01:08.932Z] Copying: 501/1024 [MB] (16 MBps) [2024-11-29T08:01:09.877Z] Copying: 519/1024 [MB] (18 MBps) [2024-11-29T08:01:10.822Z] Copying: 535/1024 [MB] (15 MBps) [2024-11-29T08:01:11.768Z] Copying: 552/1024 [MB] (17 MBps) [2024-11-29T08:01:13.155Z] Copying: 572/1024 [MB] (19 MBps) [2024-11-29T08:01:14.099Z] Copying: 589/1024 [MB] (16 MBps) [2024-11-29T08:01:15.043Z] Copying: 606/1024 [MB] (17 MBps) [2024-11-29T08:01:15.987Z] Copying: 626/1024 [MB] (19 MBps) [2024-11-29T08:01:16.929Z] Copying: 648/1024 [MB] (22 MBps) [2024-11-29T08:01:17.868Z] Copying: 671/1024 [MB] (22 MBps) [2024-11-29T08:01:18.813Z] Copying: 684/1024 [MB] (12 MBps) [2024-11-29T08:01:19.760Z] Copying: 697/1024 [MB] (13 MBps) [2024-11-29T08:01:21.147Z] Copying: 715/1024 [MB] (17 MBps) [2024-11-29T08:01:22.091Z] Copying: 739/1024 [MB] (23 MBps) [2024-11-29T08:01:23.037Z] Copying: 756/1024 [MB] (17 MBps) [2024-11-29T08:01:24.035Z] Copying: 780/1024 [MB] (23 MBps) [2024-11-29T08:01:24.982Z] Copying: 799/1024 [MB] (19 MBps) [2024-11-29T08:01:25.926Z] Copying: 819/1024 [MB] (19 MBps) [2024-11-29T08:01:26.873Z] Copying: 830/1024 [MB] (11 MBps) [2024-11-29T08:01:27.818Z] Copying: 841/1024 [MB] (10 MBps) [2024-11-29T08:01:28.765Z] Copying: 858/1024 [MB] (17 MBps) [2024-11-29T08:01:30.154Z] Copying: 874/1024 [MB] (16 MBps) [2024-11-29T08:01:31.101Z] Copying: 885/1024 [MB] (10 MBps) [2024-11-29T08:01:32.049Z] Copying: 908/1024 [MB] (22 MBps) [2024-11-29T08:01:32.994Z] Copying: 922/1024 [MB] (14 MBps) [2024-11-29T08:01:33.940Z] Copying: 944/1024 [MB] (22 MBps) [2024-11-29T08:01:34.885Z] Copying: 961/1024 [MB] (17 MBps) [2024-11-29T08:01:35.829Z] Copying: 979/1024 [MB] (17 MBps) 
[2024-11-29T08:01:36.775Z] Copying: 991/1024 [MB] (12 MBps) [2024-11-29T08:01:38.164Z] Copying: 1003/1024 [MB] (11 MBps) [2024-11-29T08:01:38.425Z] Copying: 1016/1024 [MB] (13 MBps) [2024-11-29T08:01:38.996Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-11-29 08:01:38.930283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.052 [2024-11-29 08:01:38.930389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:49.052 [2024-11-29 08:01:38.930417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:49.052 [2024-11-29 08:01:38.930434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.052 [2024-11-29 08:01:38.930504] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:49.052 [2024-11-29 08:01:38.934801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.052 [2024-11-29 08:01:38.934847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:49.052 [2024-11-29 08:01:38.934859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.268 ms 00:29:49.052 [2024-11-29 08:01:38.934869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.052 [2024-11-29 08:01:38.935120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.052 [2024-11-29 08:01:38.935130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:49.052 [2024-11-29 08:01:38.935140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.221 ms 00:29:49.052 [2024-11-29 08:01:38.935148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.052 [2024-11-29 08:01:38.938608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.052 [2024-11-29 08:01:38.938626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:49.052 [2024-11-29 08:01:38.938636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.445 ms 00:29:49.052 [2024-11-29 08:01:38.938649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.052 [2024-11-29 08:01:38.945808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.053 [2024-11-29 08:01:38.945846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:49.053 [2024-11-29 08:01:38.945859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.142 ms 00:29:49.053 [2024-11-29 08:01:38.945870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.053 [2024-11-29 08:01:38.974159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.053 [2024-11-29 08:01:38.974207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:49.053 [2024-11-29 08:01:38.974221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.210 ms 00:29:49.053 [2024-11-29 08:01:38.974229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.053 [2024-11-29 08:01:38.990029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.053 [2024-11-29 08:01:38.990075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:49.053 [2024-11-29 08:01:38.990089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.748 ms 00:29:49.053 [2024-11-29 08:01:38.990098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.315 
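This is the mirror image of the startup earlier: once I/O stops, the shutdown sequence persists state in dependency order -- L2P first, then NV cache metadata and the valid map here, followed by P2L, band info, trim metadata, and the superblock below -- and ends with 'Set FTL clean state', undoing the 'Set FTL dirty state' mark the startup placed so that a crash in between stays detectable. Tearing an FTL bdev down by hand triggers this same management process; a sketch, assuming a running target and the stock rpc.py (RPC name as I recall it from the SPDK docs -- verify before relying on it):

  # Detach ftl0 cleanly; the target runs the same 'FTL shutdown' process
  # traced here (persist metadata, write superblock, set clean state).
  scripts/rpc.py bdev_ftl_delete -b ftl0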
[2024-11-29 08:01:38.996060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.315 [2024-11-29 08:01:38.996104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:49.315 [2024-11-29 08:01:38.996115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.922 ms 00:29:49.315 [2024-11-29 08:01:38.996123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.315 [2024-11-29 08:01:39.021871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.315 [2024-11-29 08:01:39.021914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:49.315 [2024-11-29 08:01:39.021927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.731 ms 00:29:49.315 [2024-11-29 08:01:39.021934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.315 [2024-11-29 08:01:39.047704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.315 [2024-11-29 08:01:39.047744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:49.315 [2024-11-29 08:01:39.047756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.723 ms 00:29:49.315 [2024-11-29 08:01:39.047764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.315 [2024-11-29 08:01:39.072678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.315 [2024-11-29 08:01:39.072719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:49.315 [2024-11-29 08:01:39.072731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.864 ms 00:29:49.315 [2024-11-29 08:01:39.072738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.315 [2024-11-29 08:01:39.097580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.315 [2024-11-29 08:01:39.097621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:49.315 [2024-11-29 08:01:39.097633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.765 ms 00:29:49.315 [2024-11-29 08:01:39.097641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.315 [2024-11-29 08:01:39.097686] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:49.315 [2024-11-29 08:01:39.097709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:49.315 [2024-11-29 08:01:39.097724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:29:49.315 [2024-11-29 08:01:39.097733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:49.315 [2024-11-29 08:01:39.097743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:49.315 [2024-11-29 08:01:39.097751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:49.315 [2024-11-29 08:01:39.097759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:49.315 [2024-11-29 08:01:39.097768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:49.315 [2024-11-29 08:01:39.097777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:49.315 [2024-11-29 08:01:39.097785] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:49.315 [2024-11-29 08:01:39.097794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:49.315 [2024-11-29 08:01:39.097801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:49.315 [2024-11-29 08:01:39.097810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:49.315 [2024-11-29 08:01:39.097817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:49.315 [2024-11-29 08:01:39.097825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:49.315 [2024-11-29 08:01:39.097833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:49.315 [2024-11-29 08:01:39.097841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:49.315 [2024-11-29 08:01:39.097848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:49.315 [2024-11-29 08:01:39.097856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:49.315 [2024-11-29 08:01:39.097863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:49.315 [2024-11-29 08:01:39.097871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:49.315 [2024-11-29 08:01:39.097878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:49.315 [2024-11-29 08:01:39.097886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:49.315 [2024-11-29 08:01:39.097893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.097900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.097907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.097915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.097922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.097929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.097939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.097947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.097955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.097963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.097971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 
[2024-11-29 08:01:39.097979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.097986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.097993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 
state: free 00:29:49.316 [2024-11-29 08:01:39.098165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 
0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:49.316 [2024-11-29 08:01:39.098521] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:49.316 [2024-11-29 08:01:39.098529] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b19a4395-9893-474f-acb4-ce9ded40bf2e 00:29:49.316 [2024-11-29 08:01:39.098538] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:29:49.316 [2024-11-29 08:01:39.098546] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:49.316 [2024-11-29 08:01:39.098553] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:49.316 [2024-11-29 08:01:39.098562] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:49.316 [2024-11-29 08:01:39.098578] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:49.316 [2024-11-29 08:01:39.098586] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:49.316 [2024-11-29 08:01:39.098593] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:49.316 [2024-11-29 
08:01:39.098600] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:49.316 [2024-11-29 08:01:39.098606] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:49.316 [2024-11-29 08:01:39.098614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.316 [2024-11-29 08:01:39.098622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:49.316 [2024-11-29 08:01:39.098632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.929 ms 00:29:49.316 [2024-11-29 08:01:39.098642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.316 [2024-11-29 08:01:39.112248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.317 [2024-11-29 08:01:39.112285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:49.317 [2024-11-29 08:01:39.112297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.586 ms 00:29:49.317 [2024-11-29 08:01:39.112305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.317 [2024-11-29 08:01:39.112738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.317 [2024-11-29 08:01:39.112758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:49.317 [2024-11-29 08:01:39.112769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.395 ms 00:29:49.317 [2024-11-29 08:01:39.112776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.317 [2024-11-29 08:01:39.149463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:49.317 [2024-11-29 08:01:39.149505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:49.317 [2024-11-29 08:01:39.149518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:49.317 [2024-11-29 08:01:39.149527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.317 [2024-11-29 08:01:39.149592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:49.317 [2024-11-29 08:01:39.149606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:49.317 [2024-11-29 08:01:39.149616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:49.317 [2024-11-29 08:01:39.149626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.317 [2024-11-29 08:01:39.149714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:49.317 [2024-11-29 08:01:39.149726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:49.317 [2024-11-29 08:01:39.149736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:49.317 [2024-11-29 08:01:39.149745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.317 [2024-11-29 08:01:39.149763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:49.317 [2024-11-29 08:01:39.149772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:49.317 [2024-11-29 08:01:39.149785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:49.317 [2024-11-29 08:01:39.149812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.317 [2024-11-29 08:01:39.234090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:49.317 [2024-11-29 08:01:39.234142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize NV cache 00:29:49.317 [2024-11-29 08:01:39.234155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:49.317 [2024-11-29 08:01:39.234164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.578 [2024-11-29 08:01:39.304763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:49.578 [2024-11-29 08:01:39.304824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:49.578 [2024-11-29 08:01:39.304836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:49.578 [2024-11-29 08:01:39.304845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.578 [2024-11-29 08:01:39.304912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:49.578 [2024-11-29 08:01:39.304923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:49.578 [2024-11-29 08:01:39.304932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:49.578 [2024-11-29 08:01:39.304941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.578 [2024-11-29 08:01:39.305004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:49.578 [2024-11-29 08:01:39.305015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:49.578 [2024-11-29 08:01:39.305025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:49.578 [2024-11-29 08:01:39.305035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.578 [2024-11-29 08:01:39.305134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:49.578 [2024-11-29 08:01:39.305144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:49.578 [2024-11-29 08:01:39.305181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:49.578 [2024-11-29 08:01:39.305189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.578 [2024-11-29 08:01:39.305224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:49.578 [2024-11-29 08:01:39.305234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:49.578 [2024-11-29 08:01:39.305243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:49.578 [2024-11-29 08:01:39.305251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.578 [2024-11-29 08:01:39.305299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:49.578 [2024-11-29 08:01:39.305309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:49.578 [2024-11-29 08:01:39.305318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:49.578 [2024-11-29 08:01:39.305326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.578 [2024-11-29 08:01:39.305374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:49.578 [2024-11-29 08:01:39.305385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:49.578 [2024-11-29 08:01:39.305394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:49.578 [2024-11-29 08:01:39.305405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.578 [2024-11-29 08:01:39.305570] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, 
name 'FTL shutdown', duration = 375.271 ms, result 0
00:29:50.175 
00:29:50.175 
00:29:50.175 08:01:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5
00:29:52.719 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK
00:29:52.719 08:01:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT
00:29:52.719 08:01:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill
00:29:52.719 08:01:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:29:52.719 08:01:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:29:52.719 08:01:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2
00:29:52.719 08:01:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:29:52.719 08:01:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5
00:29:52.719 Process with pid 80244 is not found
00:29:52.719 08:01:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 80244
00:29:52.719 08:01:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 80244 ']'
00:29:52.719 08:01:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 80244
00:29:52.719 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (80244) - No such process
00:29:52.720 08:01:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 80244 is not found'
00:29:52.720 08:01:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd
00:29:52.981 Remove shared memory files
00:29:52.981 08:01:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm
00:29:52.981 08:01:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files
00:29:52.981 08:01:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f
00:29:52.981 08:01:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f
00:29:52.981 08:01:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f
00:29:52.981 08:01:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:29:52.981 08:01:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f
00:29:52.981 
00:29:52.981 real 4m9.650s
00:29:52.981 user 4m26.309s
00:29:52.981 sys 0m25.014s
00:29:52.981 08:01:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:52.981 08:01:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x
00:29:52.981 ************************************
00:29:52.981 END TEST ftl_dirty_shutdown
00:29:52.981 ************************************
00:29:53.243 08:01:42 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0
00:29:53.244 08:01:42 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:29:53.244 08:01:42 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable
00:29:53.244 08:01:42 ftl -- common/autotest_common.sh@10 -- # set +x
00:29:53.244 ************************************
00:29:53.244 START TEST ftl_upgrade_shutdown
00:29:53.244 ************************************
00:29:53.244 08:01:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0
00:29:53.244 * Looking for test storage...
00:29:53.244 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version
00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l
00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l
00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-:
00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1
00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-:
00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2
00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<'
00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2
00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1
00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in
00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1
00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 ))
00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1
00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1
00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1
00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1
00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2
00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2
00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2
00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2
00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0
00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:29:53.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:53.244 --rc genhtml_branch_coverage=1
00:29:53.244 --rc genhtml_function_coverage=1
00:29:53.244 --rc genhtml_legend=1
00:29:53.244 --rc geninfo_all_blocks=1
00:29:53.244 --rc geninfo_unexecuted_blocks=1
00:29:53.244 
00:29:53.244 '
00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:29:53.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:53.244 --rc genhtml_branch_coverage=1
00:29:53.244 --rc genhtml_function_coverage=1
00:29:53.244 --rc genhtml_legend=1
00:29:53.244 --rc geninfo_all_blocks=1
00:29:53.244 --rc geninfo_unexecuted_blocks=1
00:29:53.244 
00:29:53.244 '
00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:29:53.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:53.244 --rc genhtml_branch_coverage=1
00:29:53.244 --rc genhtml_function_coverage=1
00:29:53.244 --rc genhtml_legend=1
00:29:53.244 --rc geninfo_all_blocks=1
00:29:53.244 --rc geninfo_unexecuted_blocks=1
00:29:53.244 
00:29:53.244 '
00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:29:53.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:53.244 --rc genhtml_branch_coverage=1
00:29:53.244 --rc genhtml_function_coverage=1
00:29:53.244 --rc genhtml_legend=1
00:29:53.244 --rc geninfo_all_blocks=1
00:29:53.244 --rc geninfo_unexecuted_blocks=1
00:29:53.244 
00:29:53.244 '
00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh
00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown --
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:29:53.244 08:01:43 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=82920 00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 82920 00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 82920 ']' 00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:53.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:53.244 08:01:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:53.506 [2024-11-29 08:01:43.205601] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
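The target bring-up traced above reduces to a short, repeatable pattern: export the FTL geometry, start a spdk_tgt pinned to a single core, and block until its RPC socket answers. A minimal sketch of that flow, assuming the repo layout and the waitforlisten helper from test/common/autotest_common.sh (the condensation is mine; the exported names and the spdk_tgt invocation are the ones logged above):

#!/usr/bin/env bash
rootdir=/home/vagrant/spdk_repo/spdk
source "$rootdir/test/common/autotest_common.sh"

# FTL geometry consumed by test/ftl/common.sh (sizes in MiB).
export FTL_BDEV=ftl FTL_BASE=0000:00:11.0 FTL_BASE_SIZE=20480
export FTL_CACHE=0000:00:10.0 FTL_CACHE_SIZE=5120 FTL_L2P_DRAM_LIMIT=2

# Start the target on core 0; waitforlisten polls until the app's RPC
# socket (/var/tmp/spdk.sock by default) accepts connections.
"$rootdir/build/bin/spdk_tgt" --cpumask='[0]' &
spdk_tgt_pid=$!
waitforlisten "$spdk_tgt_pid"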
00:29:53.507 [2024-11-29 08:01:43.205758] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82920 ] 00:29:53.507 [2024-11-29 08:01:43.370383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:53.777 [2024-11-29 08:01:43.490693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:54.356 08:01:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:54.356 08:01:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:54.356 08:01:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:54.356 08:01:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:29:54.356 08:01:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:29:54.356 08:01:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:54.356 08:01:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:29:54.356 08:01:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:54.356 08:01:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:29:54.356 08:01:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:54.356 08:01:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:29:54.356 08:01:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:54.356 08:01:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:29:54.356 08:01:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:54.356 08:01:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:29:54.356 08:01:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:54.356 08:01:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:29:54.356 08:01:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:29:54.356 08:01:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:29:54.356 08:01:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:29:54.356 08:01:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:29:54.356 08:01:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:29:54.356 08:01:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:29:54.619 08:01:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:29:54.619 08:01:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:29:54.619 08:01:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:29:54.619 08:01:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:29:54.619 08:01:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:54.619 08:01:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:54.619 08:01:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:29:54.619 08:01:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:29:54.882 08:01:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:54.882 { 00:29:54.882 "name": "basen1", 00:29:54.882 "aliases": [ 00:29:54.882 "ae45cb6b-826e-4b6c-b9b2-3666ea182f91" 00:29:54.882 ], 00:29:54.882 "product_name": "NVMe disk", 00:29:54.882 "block_size": 4096, 00:29:54.882 "num_blocks": 1310720, 00:29:54.882 "uuid": "ae45cb6b-826e-4b6c-b9b2-3666ea182f91", 00:29:54.882 "numa_id": -1, 00:29:54.882 "assigned_rate_limits": { 00:29:54.882 "rw_ios_per_sec": 0, 00:29:54.882 "rw_mbytes_per_sec": 0, 00:29:54.882 "r_mbytes_per_sec": 0, 00:29:54.882 "w_mbytes_per_sec": 0 00:29:54.882 }, 00:29:54.882 "claimed": true, 00:29:54.882 "claim_type": "read_many_write_one", 00:29:54.882 "zoned": false, 00:29:54.882 "supported_io_types": { 00:29:54.882 "read": true, 00:29:54.882 "write": true, 00:29:54.882 "unmap": true, 00:29:54.882 "flush": true, 00:29:54.882 "reset": true, 00:29:54.882 "nvme_admin": true, 00:29:54.882 "nvme_io": true, 00:29:54.882 "nvme_io_md": false, 00:29:54.882 "write_zeroes": true, 00:29:54.882 "zcopy": false, 00:29:54.882 "get_zone_info": false, 00:29:54.882 "zone_management": false, 00:29:54.882 "zone_append": false, 00:29:54.882 "compare": true, 00:29:54.882 "compare_and_write": false, 00:29:54.882 "abort": true, 00:29:54.882 "seek_hole": false, 00:29:54.882 "seek_data": false, 00:29:54.882 "copy": true, 00:29:54.882 "nvme_iov_md": false 00:29:54.882 }, 00:29:54.882 "driver_specific": { 00:29:54.882 "nvme": [ 00:29:54.882 { 00:29:54.882 "pci_address": "0000:00:11.0", 00:29:54.882 "trid": { 00:29:54.882 "trtype": "PCIe", 00:29:54.882 "traddr": "0000:00:11.0" 00:29:54.882 }, 00:29:54.882 "ctrlr_data": { 00:29:54.882 "cntlid": 0, 00:29:54.882 "vendor_id": "0x1b36", 00:29:54.882 "model_number": "QEMU NVMe Ctrl", 00:29:54.882 "serial_number": "12341", 00:29:54.882 "firmware_revision": "8.0.0", 00:29:54.882 "subnqn": "nqn.2019-08.org.qemu:12341", 00:29:54.882 "oacs": { 00:29:54.882 "security": 0, 00:29:54.882 "format": 1, 00:29:54.882 "firmware": 0, 00:29:54.882 "ns_manage": 1 00:29:54.882 }, 00:29:54.882 "multi_ctrlr": false, 00:29:54.882 "ana_reporting": false 00:29:54.882 }, 00:29:54.882 "vs": { 00:29:54.882 "nvme_version": "1.4" 00:29:54.882 }, 00:29:54.882 "ns_data": { 00:29:54.882 "id": 1, 00:29:54.882 "can_share": false 00:29:54.882 } 00:29:54.882 } 00:29:54.882 ], 00:29:54.882 "mp_policy": "active_passive" 00:29:54.882 } 00:29:54.882 } 00:29:54.882 ]' 00:29:54.882 08:01:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:54.882 08:01:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:54.882 08:01:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:54.882 08:01:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:29:54.882 08:01:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:29:54.882 08:01:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:29:54.882 08:01:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:29:54.882 08:01:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:29:54.882 08:01:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:29:54.882 08:01:44 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:54.882 08:01:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:55.144 08:01:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=a88a7448-de78-419a-9cbb-10532c28d868 00:29:55.144 08:01:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:29:55.144 08:01:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a88a7448-de78-419a-9cbb-10532c28d868 00:29:55.406 08:01:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:29:55.668 08:01:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=55edc61c-77de-487d-8d09-a8071fe7f212 00:29:55.668 08:01:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 55edc61c-77de-487d-8d09-a8071fe7f212 00:29:55.931 08:01:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=d753825a-aa19-4cfb-8c44-61c84f4392c3 00:29:55.931 08:01:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z d753825a-aa19-4cfb-8c44-61c84f4392c3 ]] 00:29:55.931 08:01:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 d753825a-aa19-4cfb-8c44-61c84f4392c3 5120 00:29:55.931 08:01:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:29:55.931 08:01:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:29:55.931 08:01:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=d753825a-aa19-4cfb-8c44-61c84f4392c3 00:29:55.931 08:01:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:29:55.931 08:01:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size d753825a-aa19-4cfb-8c44-61c84f4392c3 00:29:55.931 08:01:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=d753825a-aa19-4cfb-8c44-61c84f4392c3 00:29:55.931 08:01:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:55.931 08:01:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:55.931 08:01:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:29:55.931 08:01:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d753825a-aa19-4cfb-8c44-61c84f4392c3 00:29:56.201 08:01:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:56.201 { 00:29:56.201 "name": "d753825a-aa19-4cfb-8c44-61c84f4392c3", 00:29:56.201 "aliases": [ 00:29:56.201 "lvs/basen1p0" 00:29:56.201 ], 00:29:56.201 "product_name": "Logical Volume", 00:29:56.201 "block_size": 4096, 00:29:56.201 "num_blocks": 5242880, 00:29:56.201 "uuid": "d753825a-aa19-4cfb-8c44-61c84f4392c3", 00:29:56.202 "assigned_rate_limits": { 00:29:56.202 "rw_ios_per_sec": 0, 00:29:56.202 "rw_mbytes_per_sec": 0, 00:29:56.202 "r_mbytes_per_sec": 0, 00:29:56.202 "w_mbytes_per_sec": 0 00:29:56.202 }, 00:29:56.202 "claimed": false, 00:29:56.202 "zoned": false, 00:29:56.202 "supported_io_types": { 00:29:56.202 "read": true, 00:29:56.202 "write": true, 00:29:56.202 "unmap": true, 00:29:56.202 "flush": false, 00:29:56.202 "reset": true, 00:29:56.202 "nvme_admin": false, 00:29:56.202 "nvme_io": false, 00:29:56.202 "nvme_io_md": false, 00:29:56.202 "write_zeroes": 
true, 00:29:56.202 "zcopy": false, 00:29:56.202 "get_zone_info": false, 00:29:56.202 "zone_management": false, 00:29:56.202 "zone_append": false, 00:29:56.202 "compare": false, 00:29:56.202 "compare_and_write": false, 00:29:56.202 "abort": false, 00:29:56.202 "seek_hole": true, 00:29:56.202 "seek_data": true, 00:29:56.202 "copy": false, 00:29:56.202 "nvme_iov_md": false 00:29:56.202 }, 00:29:56.202 "driver_specific": { 00:29:56.202 "lvol": { 00:29:56.202 "lvol_store_uuid": "55edc61c-77de-487d-8d09-a8071fe7f212", 00:29:56.202 "base_bdev": "basen1", 00:29:56.202 "thin_provision": true, 00:29:56.202 "num_allocated_clusters": 0, 00:29:56.202 "snapshot": false, 00:29:56.202 "clone": false, 00:29:56.202 "esnap_clone": false 00:29:56.202 } 00:29:56.202 } 00:29:56.202 } 00:29:56.202 ]' 00:29:56.202 08:01:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:56.202 08:01:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:56.202 08:01:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:56.202 08:01:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:29:56.203 08:01:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:29:56.203 08:01:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:29:56.203 08:01:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:29:56.203 08:01:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:29:56.203 08:01:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:29:56.466 08:01:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:29:56.466 08:01:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:29:56.466 08:01:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:29:56.727 08:01:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:29:56.727 08:01:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:29:56.727 08:01:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d d753825a-aa19-4cfb-8c44-61c84f4392c3 -c cachen1p0 --l2p_dram_limit 2 00:29:56.988 [2024-11-29 08:01:46.699025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.988 [2024-11-29 08:01:46.699094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:56.988 [2024-11-29 08:01:46.699112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:56.988 [2024-11-29 08:01:46.699121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.988 [2024-11-29 08:01:46.699198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.988 [2024-11-29 08:01:46.699209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:56.988 [2024-11-29 08:01:46.699220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:29:56.988 [2024-11-29 08:01:46.699229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.988 [2024-11-29 08:01:46.699253] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:56.988 [2024-11-29 
08:01:46.700051] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:56.988 [2024-11-29 08:01:46.700085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.988 [2024-11-29 08:01:46.700094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:56.988 [2024-11-29 08:01:46.700105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.834 ms 00:29:56.988 [2024-11-29 08:01:46.700114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.988 [2024-11-29 08:01:46.700193] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID b15728e8-ebda-4091-89bb-fcd808674f5c 00:29:56.988 [2024-11-29 08:01:46.701977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.988 [2024-11-29 08:01:46.702027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:29:56.988 [2024-11-29 08:01:46.702038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:29:56.988 [2024-11-29 08:01:46.702048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.988 [2024-11-29 08:01:46.710611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.988 [2024-11-29 08:01:46.710660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:56.988 [2024-11-29 08:01:46.710671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.517 ms 00:29:56.988 [2024-11-29 08:01:46.710682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.988 [2024-11-29 08:01:46.710729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.988 [2024-11-29 08:01:46.710741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:56.988 [2024-11-29 08:01:46.710750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:29:56.988 [2024-11-29 08:01:46.710762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.988 [2024-11-29 08:01:46.710823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.989 [2024-11-29 08:01:46.710837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:56.989 [2024-11-29 08:01:46.710848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:29:56.989 [2024-11-29 08:01:46.710860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.989 [2024-11-29 08:01:46.710885] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:56.989 [2024-11-29 08:01:46.715298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.989 [2024-11-29 08:01:46.715339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:56.989 [2024-11-29 08:01:46.715353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.418 ms 00:29:56.989 [2024-11-29 08:01:46.715361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.989 [2024-11-29 08:01:46.715395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.989 [2024-11-29 08:01:46.715403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:56.989 [2024-11-29 08:01:46.715414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:56.989 [2024-11-29 08:01:46.715422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:29:56.989 [2024-11-29 08:01:46.715472] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:29:56.989 [2024-11-29 08:01:46.715620] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:56.989 [2024-11-29 08:01:46.715638] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:56.989 [2024-11-29 08:01:46.715649] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:56.989 [2024-11-29 08:01:46.715662] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:56.989 [2024-11-29 08:01:46.715671] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:29:56.989 [2024-11-29 08:01:46.715682] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:56.989 [2024-11-29 08:01:46.715692] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:56.989 [2024-11-29 08:01:46.715701] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:56.989 [2024-11-29 08:01:46.715709] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:56.989 [2024-11-29 08:01:46.715720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.989 [2024-11-29 08:01:46.715727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:56.989 [2024-11-29 08:01:46.715737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.250 ms 00:29:56.989 [2024-11-29 08:01:46.715744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.989 [2024-11-29 08:01:46.715830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.989 [2024-11-29 08:01:46.715849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:56.989 [2024-11-29 08:01:46.715859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.068 ms 00:29:56.989 [2024-11-29 08:01:46.715867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.989 [2024-11-29 08:01:46.715975] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:56.989 [2024-11-29 08:01:46.715994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:56.989 [2024-11-29 08:01:46.716006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:56.989 [2024-11-29 08:01:46.716014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:56.989 [2024-11-29 08:01:46.716026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:56.989 [2024-11-29 08:01:46.716033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:56.989 [2024-11-29 08:01:46.716042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:56.989 [2024-11-29 08:01:46.716048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:56.989 [2024-11-29 08:01:46.716057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:56.989 [2024-11-29 08:01:46.716064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:56.989 [2024-11-29 08:01:46.716074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:56.989 [2024-11-29 08:01:46.716081] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:29:56.989 [2024-11-29 08:01:46.716090] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:56.989 [2024-11-29 08:01:46.716097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:56.989 [2024-11-29 08:01:46.716106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:56.989 [2024-11-29 08:01:46.716113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:56.989 [2024-11-29 08:01:46.716125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:56.989 [2024-11-29 08:01:46.716132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:56.989 [2024-11-29 08:01:46.716143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:56.989 [2024-11-29 08:01:46.716151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:56.989 [2024-11-29 08:01:46.716161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:56.989 [2024-11-29 08:01:46.716169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:56.989 [2024-11-29 08:01:46.716177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:56.989 [2024-11-29 08:01:46.716184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:56.989 [2024-11-29 08:01:46.716193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:56.989 [2024-11-29 08:01:46.716200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:56.989 [2024-11-29 08:01:46.716208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:56.989 [2024-11-29 08:01:46.716215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:56.989 [2024-11-29 08:01:46.716224] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:56.989 [2024-11-29 08:01:46.716230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:56.989 [2024-11-29 08:01:46.716239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:56.989 [2024-11-29 08:01:46.716246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:56.989 [2024-11-29 08:01:46.716256] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:56.989 [2024-11-29 08:01:46.716263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:56.989 [2024-11-29 08:01:46.716271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:56.989 [2024-11-29 08:01:46.716277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:56.989 [2024-11-29 08:01:46.716286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:56.989 [2024-11-29 08:01:46.716292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:56.989 [2024-11-29 08:01:46.716301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:56.989 [2024-11-29 08:01:46.716307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:56.989 [2024-11-29 08:01:46.716316] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:56.989 [2024-11-29 08:01:46.716323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:56.989 [2024-11-29 08:01:46.716331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:56.989 [2024-11-29 08:01:46.716338] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:29:56.989 [2024-11-29 08:01:46.716348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:56.989 [2024-11-29 08:01:46.716356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:56.989 [2024-11-29 08:01:46.716367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:56.989 [2024-11-29 08:01:46.716375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:56.989 [2024-11-29 08:01:46.716386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:56.989 [2024-11-29 08:01:46.716392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:56.989 [2024-11-29 08:01:46.716401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:56.989 [2024-11-29 08:01:46.716410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:56.989 [2024-11-29 08:01:46.716425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:56.989 [2024-11-29 08:01:46.716436] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:56.989 [2024-11-29 08:01:46.716471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:56.989 [2024-11-29 08:01:46.716480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:56.989 [2024-11-29 08:01:46.716490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:56.989 [2024-11-29 08:01:46.716498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:56.989 [2024-11-29 08:01:46.716508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:56.989 [2024-11-29 08:01:46.716515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:56.989 [2024-11-29 08:01:46.716525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:56.989 [2024-11-29 08:01:46.716532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:56.989 [2024-11-29 08:01:46.716542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:56.989 [2024-11-29 08:01:46.716550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:56.989 [2024-11-29 08:01:46.716561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:56.989 [2024-11-29 08:01:46.716568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:56.989 [2024-11-29 08:01:46.716578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:56.989 [2024-11-29 08:01:46.716585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:56.989 [2024-11-29 08:01:46.716595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:56.989 [2024-11-29 08:01:46.716602] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:56.989 [2024-11-29 08:01:46.716613] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:56.990 [2024-11-29 08:01:46.716621] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:56.990 [2024-11-29 08:01:46.716631] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:56.990 [2024-11-29 08:01:46.716638] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:56.990 [2024-11-29 08:01:46.716647] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:56.990 [2024-11-29 08:01:46.716655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.990 [2024-11-29 08:01:46.716665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:56.990 [2024-11-29 08:01:46.716673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.751 ms 00:29:56.990 [2024-11-29 08:01:46.716682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.990 [2024-11-29 08:01:46.716721] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
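Everything from the first "Check configuration" step down to the superblock layout dump above is driven by a single RPC, issued at ftl/common.sh@119 with a 60-second timeout to ride out the first-startup NV cache scrub that follows. A condensed sketch of that call plus a follow-up query (arguments are the logged ones; the -d UUID is the thin lvol created earlier in this run):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# -b names the new FTL bdev, -d the base (data) bdev, -c the NV cache bdev;
# --l2p_dram_limit caps the resident L2P table (2 MiB here, per the
# "l2p maximum resident size is: 1 (of 2) MiB" notice below).
$rpc -t 60 bdev_ftl_create -b ftl -d d753825a-aa19-4cfb-8c44-61c84f4392c3 \
    -c cachen1p0 --l2p_dram_limit 2
# The create reply (name "ftl" plus the UUID printed below) can be cross-checked:
$rpc bdev_get_bdevs -b ftl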
00:29:56.990 [2024-11-29 08:01:46.716741] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:30:01.200 [2024-11-29 08:01:50.416915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:01.200 [2024-11-29 08:01:50.417007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:30:01.200 [2024-11-29 08:01:50.417024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3700.177 ms 00:30:01.200 [2024-11-29 08:01:50.417037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:01.200 [2024-11-29 08:01:50.448566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:01.200 [2024-11-29 08:01:50.448630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:01.200 [2024-11-29 08:01:50.448645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.273 ms 00:30:01.200 [2024-11-29 08:01:50.448657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:01.200 [2024-11-29 08:01:50.448745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:01.200 [2024-11-29 08:01:50.448760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:01.200 [2024-11-29 08:01:50.448770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:30:01.200 [2024-11-29 08:01:50.448787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:01.200 [2024-11-29 08:01:50.483961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:01.200 [2024-11-29 08:01:50.484015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:01.200 [2024-11-29 08:01:50.484027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.137 ms 00:30:01.200 [2024-11-29 08:01:50.484038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:01.200 [2024-11-29 08:01:50.484079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:01.200 [2024-11-29 08:01:50.484090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:01.200 [2024-11-29 08:01:50.484099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:30:01.200 [2024-11-29 08:01:50.484109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:01.200 [2024-11-29 08:01:50.484730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:01.200 [2024-11-29 08:01:50.484770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:01.200 [2024-11-29 08:01:50.484789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.553 ms 00:30:01.200 [2024-11-29 08:01:50.484800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:01.200 [2024-11-29 08:01:50.484845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:01.200 [2024-11-29 08:01:50.484859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:01.200 [2024-11-29 08:01:50.484868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:30:01.200 [2024-11-29 08:01:50.484880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:01.200 [2024-11-29 08:01:50.502062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:01.200 [2024-11-29 08:01:50.502108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:01.200 [2024-11-29 08:01:50.502119] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.161 ms 00:30:01.200 [2024-11-29 08:01:50.502129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:01.200 [2024-11-29 08:01:50.524908] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:01.200 [2024-11-29 08:01:50.526268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:01.201 [2024-11-29 08:01:50.526315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:01.201 [2024-11-29 08:01:50.526331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.051 ms 00:30:01.201 [2024-11-29 08:01:50.526340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:01.201 [2024-11-29 08:01:50.556880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:01.201 [2024-11-29 08:01:50.556932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:30:01.201 [2024-11-29 08:01:50.556949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.491 ms 00:30:01.201 [2024-11-29 08:01:50.556958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:01.201 [2024-11-29 08:01:50.557066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:01.201 [2024-11-29 08:01:50.557077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:01.201 [2024-11-29 08:01:50.557093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.056 ms 00:30:01.201 [2024-11-29 08:01:50.557102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:01.201 [2024-11-29 08:01:50.582310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:01.201 [2024-11-29 08:01:50.582357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:30:01.201 [2024-11-29 08:01:50.582374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.152 ms 00:30:01.201 [2024-11-29 08:01:50.582382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:01.201 [2024-11-29 08:01:50.607536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:01.201 [2024-11-29 08:01:50.607581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:30:01.201 [2024-11-29 08:01:50.607596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.097 ms 00:30:01.201 [2024-11-29 08:01:50.607603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:01.201 [2024-11-29 08:01:50.608203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:01.201 [2024-11-29 08:01:50.608246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:01.201 [2024-11-29 08:01:50.608262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.552 ms 00:30:01.201 [2024-11-29 08:01:50.608270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:01.201 [2024-11-29 08:01:50.693713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:01.201 [2024-11-29 08:01:50.693767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:30:01.201 [2024-11-29 08:01:50.693787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 85.396 ms 00:30:01.201 [2024-11-29 08:01:50.693796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:01.201 [2024-11-29 08:01:50.721172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:30:01.201 [2024-11-29 08:01:50.721232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:30:01.201 [2024-11-29 08:01:50.721247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.262 ms 00:30:01.201 [2024-11-29 08:01:50.721256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:01.201 [2024-11-29 08:01:50.746530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:01.201 [2024-11-29 08:01:50.746581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:30:01.201 [2024-11-29 08:01:50.746595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.218 ms 00:30:01.201 [2024-11-29 08:01:50.746603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:01.201 [2024-11-29 08:01:50.772802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:01.201 [2024-11-29 08:01:50.772853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:30:01.201 [2024-11-29 08:01:50.772869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.145 ms 00:30:01.201 [2024-11-29 08:01:50.772876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:01.201 [2024-11-29 08:01:50.772932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:01.201 [2024-11-29 08:01:50.772942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:01.201 [2024-11-29 08:01:50.772957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:30:01.201 [2024-11-29 08:01:50.772964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:01.201 [2024-11-29 08:01:50.773258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:01.201 [2024-11-29 08:01:50.773272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:01.201 [2024-11-29 08:01:50.773284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:30:01.201 [2024-11-29 08:01:50.773292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:01.201 [2024-11-29 08:01:50.774486] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4074.951 ms, result 0 00:30:01.201 { 00:30:01.201 "name": "ftl", 00:30:01.201 "uuid": "b15728e8-ebda-4091-89bb-fcd808674f5c" 00:30:01.201 } 00:30:01.201 08:01:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:30:01.201 [2024-11-29 08:01:50.993405] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:01.201 08:01:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:30:01.463 08:01:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:30:01.725 [2024-11-29 08:01:51.417823] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:01.725 08:01:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:30:01.725 [2024-11-29 08:01:51.631251] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:01.725 08:01:51 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:30:02.298 08:01:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:30:02.298 08:01:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:30:02.298 08:01:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:30:02.298 08:01:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:30:02.298 08:01:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:30:02.298 08:01:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:30:02.298 08:01:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:30:02.298 08:01:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:30:02.298 Fill FTL, iteration 1 00:30:02.298 08:01:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:30:02.298 08:01:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:02.298 08:01:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:30:02.298 08:01:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:30:02.298 08:01:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:02.298 08:01:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:02.298 08:01:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:02.298 08:01:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:30:02.298 08:01:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=83042 00:30:02.298 08:01:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:30:02.298 08:01:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:30:02.298 08:01:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 83042 /var/tmp/spdk.tgt.sock 00:30:02.298 08:01:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83042 ']' 00:30:02.298 08:01:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:30:02.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:30:02.298 08:01:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:02.298 08:01:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:30:02.298 08:01:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:02.298 08:01:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:02.298 [2024-11-29 08:01:52.082920] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
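Before any data moves, the target must export the FTL bdev over NVMe/TCP; the four RPCs doing that are scattered through the trace just above. Gathered into one sketch (arguments exactly as logged; -a allows any host and -m 1 caps the subsystem at one namespace; the redirect of save_config into tgt.json is inferred from spdk_tgt_cnfg at ftl/common.sh@16, since xtrace does not show redirections):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport --trtype TCP
$rpc nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
$rpc nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
$rpc nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 \
    -t TCP -f ipv4 -s 4420 -a 127.0.0.1
# Snapshot the whole target config so later stages can restart from it.
$rpc save_config > /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json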
00:30:02.298 [2024-11-29 08:01:52.083063] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83042 ] 00:30:02.559 [2024-11-29 08:01:52.248475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:02.559 [2024-11-29 08:01:52.372585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:03.215 08:01:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:03.215 08:01:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:03.215 08:01:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:30:03.474 ftln1 00:30:03.474 08:01:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:30:03.474 08:01:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:30:03.735 08:01:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:30:03.735 08:01:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 83042 00:30:03.735 08:01:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83042 ']' 00:30:03.735 08:01:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83042 00:30:03.735 08:01:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:03.735 08:01:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:03.735 08:01:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83042 00:30:03.735 08:01:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:03.735 killing process with pid 83042 00:30:03.735 08:01:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:03.735 08:01:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83042' 00:30:03.735 08:01:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83042 00:30:03.735 08:01:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83042 00:30:05.113 08:01:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:30:05.113 08:01:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:30:05.114 [2024-11-29 08:01:55.006969] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
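The tcp_dd wrapper whose trace begins here uses a capture-and-replay trick: a throwaway initiator app on core 1 attaches the exported namespace (its controller name ftl makes the bdev come up as ftln1), the bdev subsystem configuration is captured into ini.json, the helper app is killed, and spdk_dd replays that JSON to recreate ftln1 in-process and drive the I/O. A condensed sketch assembled from the logged commands:

ini_rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
ini_json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json

# Attach the target's NVMe/TCP subsystem from the initiator side.
$ini_rpc bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2018-09.io.spdk:cnode0

# Capture only the bdev subsystem, wrapped into a complete config document.
{
  echo '{"subsystems": ['
  $ini_rpc save_subsystem_config -n bdev
  echo ']}'
} > "$ini_json"

# With the helper app gone, spdk_dd replays the config and performs the copy:
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --cpumask='[1]' \
    --rpc-socket=/var/tmp/spdk.tgt.sock --json="$ini_json" \
    --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0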
00:30:05.114 [2024-11-29 08:01:55.007074] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83085 ] 00:30:05.377 [2024-11-29 08:01:55.162775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:05.377 [2024-11-29 08:01:55.254729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:06.757  [2024-11-29T08:01:57.644Z] Copying: 191/1024 [MB] (191 MBps) [2024-11-29T08:01:59.027Z] Copying: 398/1024 [MB] (207 MBps) [2024-11-29T08:01:59.970Z] Copying: 642/1024 [MB] (244 MBps) [2024-11-29T08:02:00.541Z] Copying: 876/1024 [MB] (234 MBps) [2024-11-29T08:02:01.112Z] Copying: 1024/1024 [MB] (average 219 MBps) 00:30:11.168 00:30:11.168 Calculate MD5 checksum, iteration 1 00:30:11.168 08:02:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:30:11.168 08:02:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:30:11.168 08:02:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:11.168 08:02:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:11.168 08:02:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:11.168 08:02:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:11.168 08:02:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:11.168 08:02:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:11.168 [2024-11-29 08:02:00.967186] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:30:11.168 [2024-11-29 08:02:00.967303] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83149 ] 00:30:11.427 [2024-11-29 08:02:01.123482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:11.427 [2024-11-29 08:02:01.231746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:12.813  [2024-11-29T08:02:03.328Z] Copying: 615/1024 [MB] (615 MBps) [2024-11-29T08:02:03.900Z] Copying: 1024/1024 [MB] (average 627 MBps) 00:30:13.956 00:30:13.956 08:02:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:30:13.956 08:02:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:16.498 08:02:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:30:16.498 Fill FTL, iteration 2 00:30:16.498 08:02:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=093fe01f577cf1faa8a5480859b8f92b 00:30:16.498 08:02:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:30:16.498 08:02:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:16.498 08:02:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:30:16.498 08:02:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:30:16.498 08:02:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:16.498 08:02:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:16.498 08:02:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:16.498 08:02:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:16.498 08:02:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:30:16.498 [2024-11-29 08:02:05.976944] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:30:16.498 [2024-11-29 08:02:05.977341] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83210 ] 00:30:16.498 [2024-11-29 08:02:06.136733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:16.498 [2024-11-29 08:02:06.241843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:17.882  [2024-11-29T08:02:08.769Z] Copying: 188/1024 [MB] (188 MBps) [2024-11-29T08:02:09.713Z] Copying: 408/1024 [MB] (220 MBps) [2024-11-29T08:02:10.659Z] Copying: 639/1024 [MB] (231 MBps) [2024-11-29T08:02:11.602Z] Copying: 865/1024 [MB] (226 MBps) [2024-11-29T08:02:12.175Z] Copying: 1024/1024 [MB] (average 220 MBps) 00:30:22.231 00:30:22.231 Calculate MD5 checksum, iteration 2 00:30:22.231 08:02:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:30:22.231 08:02:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:30:22.231 08:02:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:22.231 08:02:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:22.231 08:02:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:22.231 08:02:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:22.231 08:02:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:22.231 08:02:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:22.231 [2024-11-29 08:02:11.950123] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:30:22.231 [2024-11-29 08:02:11.950374] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83274 ] 00:30:22.231 [2024-11-29 08:02:12.107481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:22.492 [2024-11-29 08:02:12.210319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:23.878  [2024-11-29T08:02:14.394Z] Copying: 599/1024 [MB] (599 MBps) [2024-11-29T08:02:15.342Z] Copying: 1024/1024 [MB] (average 604 MBps) 00:30:25.398 00:30:25.398 08:02:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:30:25.398 08:02:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:27.942 08:02:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:30:27.942 08:02:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=5589ba2672c85e3cc8507c288c8b2d78 00:30:27.942 08:02:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:30:27.942 08:02:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:27.942 08:02:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:27.942 [2024-11-29 08:02:17.486385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.942 [2024-11-29 08:02:17.486424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:27.942 [2024-11-29 08:02:17.486435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:27.942 [2024-11-29 08:02:17.486442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.942 [2024-11-29 08:02:17.486471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.942 [2024-11-29 08:02:17.486480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:27.942 [2024-11-29 08:02:17.486487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:27.942 [2024-11-29 08:02:17.486493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.942 [2024-11-29 08:02:17.486510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.942 [2024-11-29 08:02:17.486516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:27.942 [2024-11-29 08:02:17.486522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:27.942 [2024-11-29 08:02:17.486528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.942 [2024-11-29 08:02:17.486576] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.180 ms, result 0 00:30:27.942 true 00:30:27.942 08:02:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:27.942 { 00:30:27.942 "name": "ftl", 00:30:27.942 "properties": [ 00:30:27.942 { 00:30:27.942 "name": "superblock_version", 00:30:27.942 "value": 5, 00:30:27.942 "read-only": true 00:30:27.942 }, 00:30:27.942 { 00:30:27.942 "name": "base_device", 00:30:27.942 "bands": [ 00:30:27.942 { 00:30:27.942 "id": 0, 00:30:27.942 "state": "FREE", 00:30:27.942 "validity": 0.0 
00:30:27.942 }, 00:30:27.942 { 00:30:27.942 "id": 1, 00:30:27.942 "state": "FREE", 00:30:27.942 "validity": 0.0 00:30:27.942 }, 00:30:27.942 { 00:30:27.942 "id": 2, 00:30:27.942 "state": "FREE", 00:30:27.942 "validity": 0.0 00:30:27.942 }, 00:30:27.942 { 00:30:27.942 "id": 3, 00:30:27.942 "state": "FREE", 00:30:27.942 "validity": 0.0 00:30:27.942 }, 00:30:27.942 { 00:30:27.942 "id": 4, 00:30:27.942 "state": "FREE", 00:30:27.942 "validity": 0.0 00:30:27.942 }, 00:30:27.942 { 00:30:27.942 "id": 5, 00:30:27.942 "state": "FREE", 00:30:27.942 "validity": 0.0 00:30:27.942 }, 00:30:27.942 { 00:30:27.942 "id": 6, 00:30:27.942 "state": "FREE", 00:30:27.942 "validity": 0.0 00:30:27.942 }, 00:30:27.942 { 00:30:27.942 "id": 7, 00:30:27.942 "state": "FREE", 00:30:27.942 "validity": 0.0 00:30:27.942 }, 00:30:27.942 { 00:30:27.942 "id": 8, 00:30:27.942 "state": "FREE", 00:30:27.942 "validity": 0.0 00:30:27.942 }, 00:30:27.942 { 00:30:27.942 "id": 9, 00:30:27.943 "state": "FREE", 00:30:27.943 "validity": 0.0 00:30:27.943 }, 00:30:27.943 { 00:30:27.943 "id": 10, 00:30:27.943 "state": "FREE", 00:30:27.943 "validity": 0.0 00:30:27.943 }, 00:30:27.943 { 00:30:27.943 "id": 11, 00:30:27.943 "state": "FREE", 00:30:27.943 "validity": 0.0 00:30:27.943 }, 00:30:27.943 { 00:30:27.943 "id": 12, 00:30:27.943 "state": "FREE", 00:30:27.943 "validity": 0.0 00:30:27.943 }, 00:30:27.943 { 00:30:27.943 "id": 13, 00:30:27.943 "state": "FREE", 00:30:27.943 "validity": 0.0 00:30:27.943 }, 00:30:27.943 { 00:30:27.943 "id": 14, 00:30:27.943 "state": "FREE", 00:30:27.943 "validity": 0.0 00:30:27.943 }, 00:30:27.943 { 00:30:27.943 "id": 15, 00:30:27.943 "state": "FREE", 00:30:27.943 "validity": 0.0 00:30:27.943 }, 00:30:27.943 { 00:30:27.943 "id": 16, 00:30:27.943 "state": "FREE", 00:30:27.943 "validity": 0.0 00:30:27.943 }, 00:30:27.943 { 00:30:27.943 "id": 17, 00:30:27.943 "state": "FREE", 00:30:27.943 "validity": 0.0 00:30:27.943 } 00:30:27.943 ], 00:30:27.943 "read-only": true 00:30:27.943 }, 00:30:27.943 { 00:30:27.943 "name": "cache_device", 00:30:27.943 "type": "bdev", 00:30:27.943 "chunks": [ 00:30:27.943 { 00:30:27.943 "id": 0, 00:30:27.943 "state": "INACTIVE", 00:30:27.943 "utilization": 0.0 00:30:27.943 }, 00:30:27.943 { 00:30:27.943 "id": 1, 00:30:27.943 "state": "CLOSED", 00:30:27.943 "utilization": 1.0 00:30:27.943 }, 00:30:27.943 { 00:30:27.943 "id": 2, 00:30:27.943 "state": "CLOSED", 00:30:27.943 "utilization": 1.0 00:30:27.943 }, 00:30:27.943 { 00:30:27.943 "id": 3, 00:30:27.943 "state": "OPEN", 00:30:27.943 "utilization": 0.001953125 00:30:27.943 }, 00:30:27.943 { 00:30:27.943 "id": 4, 00:30:27.943 "state": "OPEN", 00:30:27.943 "utilization": 0.0 00:30:27.943 } 00:30:27.943 ], 00:30:27.943 "read-only": true 00:30:27.943 }, 00:30:27.943 { 00:30:27.943 "name": "verbose_mode", 00:30:27.943 "value": true, 00:30:27.943 "unit": "", 00:30:27.943 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:27.943 }, 00:30:27.943 { 00:30:27.943 "name": "prep_upgrade_on_shutdown", 00:30:27.943 "value": false, 00:30:27.943 "unit": "", 00:30:27.943 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:27.943 } 00:30:27.943 ] 00:30:27.943 } 00:30:27.943 08:02:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:30:28.202 [2024-11-29 08:02:17.898739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:30:28.202 [2024-11-29 08:02:17.898773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:28.202 [2024-11-29 08:02:17.898782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:28.202 [2024-11-29 08:02:17.898788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.202 [2024-11-29 08:02:17.898804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.202 [2024-11-29 08:02:17.898811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:28.202 [2024-11-29 08:02:17.898817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:28.202 [2024-11-29 08:02:17.898824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.202 [2024-11-29 08:02:17.898837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.202 [2024-11-29 08:02:17.898843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:28.202 [2024-11-29 08:02:17.898850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:28.202 [2024-11-29 08:02:17.898855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.202 [2024-11-29 08:02:17.898895] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.149 ms, result 0 00:30:28.202 true 00:30:28.202 08:02:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:30:28.202 08:02:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:30:28.202 08:02:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:28.202 08:02:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:30:28.202 08:02:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:30:28.202 08:02:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:28.461 [2024-11-29 08:02:18.307085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.461 [2024-11-29 08:02:18.307116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:28.461 [2024-11-29 08:02:18.307124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:28.461 [2024-11-29 08:02:18.307130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.461 [2024-11-29 08:02:18.307147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.461 [2024-11-29 08:02:18.307153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:28.461 [2024-11-29 08:02:18.307159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:28.461 [2024-11-29 08:02:18.307164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.461 [2024-11-29 08:02:18.307179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.461 [2024-11-29 08:02:18.307185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:28.461 [2024-11-29 08:02:18.307191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:28.461 [2024-11-29 08:02:18.307196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:30:28.461 [2024-11-29 08:02:18.307236] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.145 ms, result 0 00:30:28.461 true 00:30:28.461 08:02:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:28.720 { 00:30:28.720 "name": "ftl", 00:30:28.720 "properties": [ 00:30:28.720 { 00:30:28.720 "name": "superblock_version", 00:30:28.720 "value": 5, 00:30:28.720 "read-only": true 00:30:28.720 }, 00:30:28.720 { 00:30:28.720 "name": "base_device", 00:30:28.720 "bands": [ 00:30:28.720 { 00:30:28.720 "id": 0, 00:30:28.720 "state": "FREE", 00:30:28.720 "validity": 0.0 00:30:28.720 }, 00:30:28.720 { 00:30:28.720 "id": 1, 00:30:28.720 "state": "FREE", 00:30:28.720 "validity": 0.0 00:30:28.720 }, 00:30:28.720 { 00:30:28.720 "id": 2, 00:30:28.720 "state": "FREE", 00:30:28.720 "validity": 0.0 00:30:28.720 }, 00:30:28.720 { 00:30:28.720 "id": 3, 00:30:28.720 "state": "FREE", 00:30:28.720 "validity": 0.0 00:30:28.720 }, 00:30:28.720 { 00:30:28.720 "id": 4, 00:30:28.720 "state": "FREE", 00:30:28.720 "validity": 0.0 00:30:28.720 }, 00:30:28.720 { 00:30:28.720 "id": 5, 00:30:28.720 "state": "FREE", 00:30:28.720 "validity": 0.0 00:30:28.720 }, 00:30:28.720 { 00:30:28.720 "id": 6, 00:30:28.720 "state": "FREE", 00:30:28.720 "validity": 0.0 00:30:28.720 }, 00:30:28.720 { 00:30:28.720 "id": 7, 00:30:28.720 "state": "FREE", 00:30:28.720 "validity": 0.0 00:30:28.720 }, 00:30:28.720 { 00:30:28.720 "id": 8, 00:30:28.720 "state": "FREE", 00:30:28.720 "validity": 0.0 00:30:28.720 }, 00:30:28.720 { 00:30:28.720 "id": 9, 00:30:28.720 "state": "FREE", 00:30:28.720 "validity": 0.0 00:30:28.720 }, 00:30:28.720 { 00:30:28.720 "id": 10, 00:30:28.720 "state": "FREE", 00:30:28.720 "validity": 0.0 00:30:28.720 }, 00:30:28.720 { 00:30:28.720 "id": 11, 00:30:28.720 "state": "FREE", 00:30:28.720 "validity": 0.0 00:30:28.720 }, 00:30:28.720 { 00:30:28.720 "id": 12, 00:30:28.720 "state": "FREE", 00:30:28.720 "validity": 0.0 00:30:28.720 }, 00:30:28.720 { 00:30:28.720 "id": 13, 00:30:28.720 "state": "FREE", 00:30:28.720 "validity": 0.0 00:30:28.720 }, 00:30:28.720 { 00:30:28.720 "id": 14, 00:30:28.720 "state": "FREE", 00:30:28.720 "validity": 0.0 00:30:28.720 }, 00:30:28.720 { 00:30:28.720 "id": 15, 00:30:28.720 "state": "FREE", 00:30:28.720 "validity": 0.0 00:30:28.720 }, 00:30:28.720 { 00:30:28.720 "id": 16, 00:30:28.720 "state": "FREE", 00:30:28.720 "validity": 0.0 00:30:28.720 }, 00:30:28.720 { 00:30:28.720 "id": 17, 00:30:28.720 "state": "FREE", 00:30:28.720 "validity": 0.0 00:30:28.720 } 00:30:28.720 ], 00:30:28.720 "read-only": true 00:30:28.720 }, 00:30:28.720 { 00:30:28.720 "name": "cache_device", 00:30:28.720 "type": "bdev", 00:30:28.720 "chunks": [ 00:30:28.720 { 00:30:28.720 "id": 0, 00:30:28.720 "state": "INACTIVE", 00:30:28.720 "utilization": 0.0 00:30:28.720 }, 00:30:28.720 { 00:30:28.720 "id": 1, 00:30:28.720 "state": "CLOSED", 00:30:28.720 "utilization": 1.0 00:30:28.720 }, 00:30:28.720 { 00:30:28.720 "id": 2, 00:30:28.720 "state": "CLOSED", 00:30:28.720 "utilization": 1.0 00:30:28.720 }, 00:30:28.720 { 00:30:28.720 "id": 3, 00:30:28.720 "state": "OPEN", 00:30:28.720 "utilization": 0.001953125 00:30:28.720 }, 00:30:28.720 { 00:30:28.720 "id": 4, 00:30:28.720 "state": "OPEN", 00:30:28.720 "utilization": 0.0 00:30:28.720 } 00:30:28.720 ], 00:30:28.720 "read-only": true 00:30:28.720 }, 00:30:28.720 { 00:30:28.720 "name": "verbose_mode", 
00:30:28.720 "value": true, 00:30:28.720 "unit": "", 00:30:28.720 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:28.720 }, 00:30:28.720 { 00:30:28.720 "name": "prep_upgrade_on_shutdown", 00:30:28.720 "value": true, 00:30:28.720 "unit": "", 00:30:28.720 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:28.720 } 00:30:28.720 ] 00:30:28.720 } 00:30:28.720 08:02:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:30:28.720 08:02:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 82920 ]] 00:30:28.720 08:02:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 82920 00:30:28.720 08:02:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 82920 ']' 00:30:28.720 08:02:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 82920 00:30:28.720 08:02:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:28.720 08:02:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:28.720 08:02:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82920 00:30:28.720 killing process with pid 82920 00:30:28.720 08:02:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:28.720 08:02:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:28.720 08:02:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82920' 00:30:28.720 08:02:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 82920 00:30:28.720 08:02:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 82920 00:30:29.287 [2024-11-29 08:02:19.069077] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:30:29.287 [2024-11-29 08:02:19.080729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:29.287 [2024-11-29 08:02:19.080765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:30:29.287 [2024-11-29 08:02:19.080774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:29.287 [2024-11-29 08:02:19.080781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:29.287 [2024-11-29 08:02:19.080799] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:30:29.287 [2024-11-29 08:02:19.082899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:29.287 [2024-11-29 08:02:19.082923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:30:29.287 [2024-11-29 08:02:19.082932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.089 ms 00:30:29.287 [2024-11-29 08:02:19.082939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.459 [2024-11-29 08:02:27.299675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:37.459 [2024-11-29 08:02:27.299763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:30:37.459 [2024-11-29 08:02:27.299788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8216.680 ms 00:30:37.459 [2024-11-29 08:02:27.299799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.459 [2024-11-29 08:02:27.301405] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:30:37.459 [2024-11-29 08:02:27.301470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:30:37.459 [2024-11-29 08:02:27.301484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.586 ms 00:30:37.459 [2024-11-29 08:02:27.301493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.459 [2024-11-29 08:02:27.302621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:37.459 [2024-11-29 08:02:27.302651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:30:37.459 [2024-11-29 08:02:27.302663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.099 ms 00:30:37.459 [2024-11-29 08:02:27.302681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.459 [2024-11-29 08:02:27.314763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:37.459 [2024-11-29 08:02:27.314821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:30:37.459 [2024-11-29 08:02:27.314835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.040 ms 00:30:37.459 [2024-11-29 08:02:27.314844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.459 [2024-11-29 08:02:27.322902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:37.460 [2024-11-29 08:02:27.322957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:30:37.460 [2024-11-29 08:02:27.322971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.005 ms 00:30:37.460 [2024-11-29 08:02:27.322981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.460 [2024-11-29 08:02:27.323089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:37.460 [2024-11-29 08:02:27.323109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:30:37.460 [2024-11-29 08:02:27.323121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.057 ms 00:30:37.460 [2024-11-29 08:02:27.323131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.460 [2024-11-29 08:02:27.333838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:37.460 [2024-11-29 08:02:27.333887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:30:37.460 [2024-11-29 08:02:27.333900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.685 ms 00:30:37.460 [2024-11-29 08:02:27.333909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.460 [2024-11-29 08:02:27.344858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:37.460 [2024-11-29 08:02:27.344907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:30:37.460 [2024-11-29 08:02:27.344919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.897 ms 00:30:37.460 [2024-11-29 08:02:27.344928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.460 [2024-11-29 08:02:27.355476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:37.460 [2024-11-29 08:02:27.355526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:30:37.460 [2024-11-29 08:02:27.355538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.496 ms 00:30:37.460 [2024-11-29 08:02:27.355546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.460 [2024-11-29 08:02:27.366151] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:37.460 [2024-11-29 08:02:27.366201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:30:37.460 [2024-11-29 08:02:27.366212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.514 ms 00:30:37.460 [2024-11-29 08:02:27.366220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.460 [2024-11-29 08:02:27.366267] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:30:37.460 [2024-11-29 08:02:27.366296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:37.460 [2024-11-29 08:02:27.366308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:30:37.460 [2024-11-29 08:02:27.366318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:30:37.460 [2024-11-29 08:02:27.366327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:37.460 [2024-11-29 08:02:27.366335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:37.460 [2024-11-29 08:02:27.366345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:37.460 [2024-11-29 08:02:27.366353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:37.460 [2024-11-29 08:02:27.366363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:37.460 [2024-11-29 08:02:27.366371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:37.460 [2024-11-29 08:02:27.366381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:37.460 [2024-11-29 08:02:27.366389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:37.460 [2024-11-29 08:02:27.366396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:37.460 [2024-11-29 08:02:27.366406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:37.460 [2024-11-29 08:02:27.366415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:37.460 [2024-11-29 08:02:27.366422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:37.460 [2024-11-29 08:02:27.366430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:37.460 [2024-11-29 08:02:27.366438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:37.460 [2024-11-29 08:02:27.366459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:37.460 [2024-11-29 08:02:27.366472] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:30:37.460 [2024-11-29 08:02:27.366481] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: b15728e8-ebda-4091-89bb-fcd808674f5c 00:30:37.460 [2024-11-29 08:02:27.366490] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:30:37.460 [2024-11-29 08:02:27.366497] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:30:37.460 [2024-11-29 08:02:27.366505] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:30:37.460 [2024-11-29 08:02:27.366518] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:30:37.460 [2024-11-29 08:02:27.366530] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:30:37.460 [2024-11-29 08:02:27.366540] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:30:37.460 [2024-11-29 08:02:27.366553] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:30:37.460 [2024-11-29 08:02:27.366561] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:30:37.460 [2024-11-29 08:02:27.366570] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:30:37.460 [2024-11-29 08:02:27.366587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:37.460 [2024-11-29 08:02:27.366597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:30:37.460 [2024-11-29 08:02:27.366607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.322 ms 00:30:37.460 [2024-11-29 08:02:27.366615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.460 [2024-11-29 08:02:27.381345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:37.460 [2024-11-29 08:02:27.381392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:30:37.460 [2024-11-29 08:02:27.381413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.711 ms 00:30:37.460 [2024-11-29 08:02:27.381423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.460 [2024-11-29 08:02:27.381861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:37.460 [2024-11-29 08:02:27.381958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:30:37.460 [2024-11-29 08:02:27.381969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.400 ms 00:30:37.460 [2024-11-29 08:02:27.381978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.721 [2024-11-29 08:02:27.432210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:37.721 [2024-11-29 08:02:27.432269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:37.721 [2024-11-29 08:02:27.432282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:37.721 [2024-11-29 08:02:27.432291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.721 [2024-11-29 08:02:27.432330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:37.721 [2024-11-29 08:02:27.432340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:37.721 [2024-11-29 08:02:27.432350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:37.721 [2024-11-29 08:02:27.432359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.721 [2024-11-29 08:02:27.432460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:37.722 [2024-11-29 08:02:27.432473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:37.722 [2024-11-29 08:02:27.432491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:37.722 [2024-11-29 08:02:27.432501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.722 [2024-11-29 08:02:27.432520] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:37.722 [2024-11-29 08:02:27.432531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:37.722 [2024-11-29 08:02:27.432541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:37.722 [2024-11-29 08:02:27.432550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.722 [2024-11-29 08:02:27.524144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:37.722 [2024-11-29 08:02:27.524205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:37.722 [2024-11-29 08:02:27.524229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:37.722 [2024-11-29 08:02:27.524239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.722 [2024-11-29 08:02:27.598336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:37.722 [2024-11-29 08:02:27.598400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:37.722 [2024-11-29 08:02:27.598416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:37.722 [2024-11-29 08:02:27.598427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.722 [2024-11-29 08:02:27.598572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:37.722 [2024-11-29 08:02:27.598588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:37.722 [2024-11-29 08:02:27.598599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:37.722 [2024-11-29 08:02:27.598616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.722 [2024-11-29 08:02:27.598663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:37.722 [2024-11-29 08:02:27.598675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:37.722 [2024-11-29 08:02:27.598685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:37.722 [2024-11-29 08:02:27.598694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.722 [2024-11-29 08:02:27.598801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:37.722 [2024-11-29 08:02:27.598813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:37.722 [2024-11-29 08:02:27.598824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:37.722 [2024-11-29 08:02:27.598834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.722 [2024-11-29 08:02:27.598878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:37.722 [2024-11-29 08:02:27.598890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:30:37.722 [2024-11-29 08:02:27.598904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:37.722 [2024-11-29 08:02:27.598913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.722 [2024-11-29 08:02:27.598966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:37.722 [2024-11-29 08:02:27.598991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:37.722 [2024-11-29 08:02:27.599002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:37.722 [2024-11-29 08:02:27.599012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.722 
[2024-11-29 08:02:27.599077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:37.722 [2024-11-29 08:02:27.599092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:37.722 [2024-11-29 08:02:27.599102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:37.722 [2024-11-29 08:02:27.599110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.722 [2024-11-29 08:02:27.599278] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8518.462 ms, result 0 00:30:43.010 08:02:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:30:43.010 08:02:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:30:43.010 08:02:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:43.010 08:02:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:43.010 08:02:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:43.010 08:02:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83458 00:30:43.010 08:02:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:43.010 08:02:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83458 00:30:43.010 08:02:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:43.010 08:02:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83458 ']' 00:30:43.010 08:02:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:43.010 08:02:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:43.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:43.010 08:02:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:43.010 08:02:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:43.010 08:02:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:43.010 [2024-11-29 08:02:32.869896] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:30:43.011 [2024-11-29 08:02:32.870053] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83458 ] 00:30:43.272 [2024-11-29 08:02:33.035717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:43.273 [2024-11-29 08:02:33.170324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:44.219 [2024-11-29 08:02:33.943076] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:44.219 [2024-11-29 08:02:33.943163] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:44.219 [2024-11-29 08:02:34.096936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.219 [2024-11-29 08:02:34.096999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:44.219 [2024-11-29 08:02:34.097015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:44.219 [2024-11-29 08:02:34.097024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.219 [2024-11-29 08:02:34.097094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.219 [2024-11-29 08:02:34.097105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:44.219 [2024-11-29 08:02:34.097114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:30:44.219 [2024-11-29 08:02:34.097122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.219 [2024-11-29 08:02:34.097147] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:44.219 [2024-11-29 08:02:34.097901] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:44.219 [2024-11-29 08:02:34.097934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.219 [2024-11-29 08:02:34.097944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:44.219 [2024-11-29 08:02:34.097953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.793 ms 00:30:44.219 [2024-11-29 08:02:34.097961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.219 [2024-11-29 08:02:34.099731] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:30:44.219 [2024-11-29 08:02:34.113986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.219 [2024-11-29 08:02:34.114045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:30:44.219 [2024-11-29 08:02:34.114059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.257 ms 00:30:44.219 [2024-11-29 08:02:34.114067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.219 [2024-11-29 08:02:34.114149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.219 [2024-11-29 08:02:34.114159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:30:44.219 [2024-11-29 08:02:34.114169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:30:44.219 [2024-11-29 08:02:34.114177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.219 [2024-11-29 08:02:34.122684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.219 [2024-11-29 
08:02:34.122727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:44.219 [2024-11-29 08:02:34.122737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.418 ms 00:30:44.219 [2024-11-29 08:02:34.122745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.219 [2024-11-29 08:02:34.122818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.219 [2024-11-29 08:02:34.122828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:44.219 [2024-11-29 08:02:34.122837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:30:44.219 [2024-11-29 08:02:34.122845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.219 [2024-11-29 08:02:34.122895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.219 [2024-11-29 08:02:34.122909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:44.219 [2024-11-29 08:02:34.122919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:30:44.219 [2024-11-29 08:02:34.122926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.219 [2024-11-29 08:02:34.122953] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:44.219 [2024-11-29 08:02:34.127082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.219 [2024-11-29 08:02:34.127122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:44.219 [2024-11-29 08:02:34.127137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.134 ms 00:30:44.219 [2024-11-29 08:02:34.127146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.219 [2024-11-29 08:02:34.127176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.219 [2024-11-29 08:02:34.127185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:44.219 [2024-11-29 08:02:34.127194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:44.219 [2024-11-29 08:02:34.127201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.219 [2024-11-29 08:02:34.127256] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:30:44.219 [2024-11-29 08:02:34.127282] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:30:44.219 [2024-11-29 08:02:34.127322] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:30:44.219 [2024-11-29 08:02:34.127339] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:30:44.219 [2024-11-29 08:02:34.127460] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:44.219 [2024-11-29 08:02:34.127473] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:44.219 [2024-11-29 08:02:34.127483] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:44.219 [2024-11-29 08:02:34.127493] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:44.219 [2024-11-29 08:02:34.127505] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:30:44.219 [2024-11-29 08:02:34.127515] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:44.219 [2024-11-29 08:02:34.127522] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:44.219 [2024-11-29 08:02:34.127530] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:44.219 [2024-11-29 08:02:34.127537] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:44.219 [2024-11-29 08:02:34.127545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.219 [2024-11-29 08:02:34.127552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:44.219 [2024-11-29 08:02:34.127561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.292 ms 00:30:44.219 [2024-11-29 08:02:34.127569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.219 [2024-11-29 08:02:34.127656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.219 [2024-11-29 08:02:34.127665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:44.219 [2024-11-29 08:02:34.127677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.070 ms 00:30:44.219 [2024-11-29 08:02:34.127685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.219 [2024-11-29 08:02:34.127789] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:44.219 [2024-11-29 08:02:34.127808] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:44.219 [2024-11-29 08:02:34.127817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:44.219 [2024-11-29 08:02:34.127826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:44.219 [2024-11-29 08:02:34.127834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:44.219 [2024-11-29 08:02:34.127842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:44.219 [2024-11-29 08:02:34.127850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:44.219 [2024-11-29 08:02:34.127857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:44.219 [2024-11-29 08:02:34.127864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:44.219 [2024-11-29 08:02:34.127871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:44.219 [2024-11-29 08:02:34.127878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:44.219 [2024-11-29 08:02:34.127886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:30:44.219 [2024-11-29 08:02:34.127893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:44.219 [2024-11-29 08:02:34.127906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:44.219 [2024-11-29 08:02:34.127913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:30:44.219 [2024-11-29 08:02:34.127921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:44.219 [2024-11-29 08:02:34.127927] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:44.220 [2024-11-29 08:02:34.127934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:44.220 [2024-11-29 08:02:34.127941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:44.220 [2024-11-29 08:02:34.127948] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:44.220 [2024-11-29 08:02:34.127956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:44.220 [2024-11-29 08:02:34.127962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:44.220 [2024-11-29 08:02:34.127969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:44.220 [2024-11-29 08:02:34.127984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:44.220 [2024-11-29 08:02:34.127990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:44.220 [2024-11-29 08:02:34.127997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:44.220 [2024-11-29 08:02:34.128004] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:44.220 [2024-11-29 08:02:34.128011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:44.220 [2024-11-29 08:02:34.128017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:44.220 [2024-11-29 08:02:34.128024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:44.220 [2024-11-29 08:02:34.128030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:44.220 [2024-11-29 08:02:34.128037] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:44.220 [2024-11-29 08:02:34.128043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:44.220 [2024-11-29 08:02:34.128049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:44.220 [2024-11-29 08:02:34.128056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:44.220 [2024-11-29 08:02:34.128062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:44.220 [2024-11-29 08:02:34.128069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:44.220 [2024-11-29 08:02:34.128075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:44.220 [2024-11-29 08:02:34.128082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:44.220 [2024-11-29 08:02:34.128088] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:44.220 [2024-11-29 08:02:34.128094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:44.220 [2024-11-29 08:02:34.128100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:44.220 [2024-11-29 08:02:34.128106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:44.220 [2024-11-29 08:02:34.128113] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:30:44.220 [2024-11-29 08:02:34.128121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:44.220 [2024-11-29 08:02:34.128132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:44.220 [2024-11-29 08:02:34.128142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:44.220 [2024-11-29 08:02:34.128150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:44.220 [2024-11-29 08:02:34.128156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:44.220 [2024-11-29 08:02:34.128163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:44.220 [2024-11-29 08:02:34.128170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:44.220 [2024-11-29 08:02:34.128177] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:44.220 [2024-11-29 08:02:34.128183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:44.220 [2024-11-29 08:02:34.128192] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:44.220 [2024-11-29 08:02:34.128201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:44.220 [2024-11-29 08:02:34.128210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:44.220 [2024-11-29 08:02:34.128217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:44.220 [2024-11-29 08:02:34.128224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:44.220 [2024-11-29 08:02:34.128231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:44.220 [2024-11-29 08:02:34.128239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:44.220 [2024-11-29 08:02:34.128246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:44.220 [2024-11-29 08:02:34.128254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:44.220 [2024-11-29 08:02:34.128261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:44.220 [2024-11-29 08:02:34.128268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:44.220 [2024-11-29 08:02:34.128275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:44.220 [2024-11-29 08:02:34.128282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:44.220 [2024-11-29 08:02:34.128289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:44.220 [2024-11-29 08:02:34.128297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:44.220 [2024-11-29 08:02:34.128304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:44.220 [2024-11-29 08:02:34.128311] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:30:44.220 [2024-11-29 08:02:34.128319] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:44.220 [2024-11-29 08:02:34.128327] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:44.220 [2024-11-29 08:02:34.128334] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:44.220 [2024-11-29 08:02:34.128342] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:44.220 [2024-11-29 08:02:34.128350] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:44.220 [2024-11-29 08:02:34.128358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.220 [2024-11-29 08:02:34.128365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:44.220 [2024-11-29 08:02:34.128380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.639 ms 00:30:44.220 [2024-11-29 08:02:34.128387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.220 [2024-11-29 08:02:34.128430] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:30:44.220 [2024-11-29 08:02:34.128463] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:30:48.430 [2024-11-29 08:02:38.283028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:48.430 [2024-11-29 08:02:38.283105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:30:48.430 [2024-11-29 08:02:38.283125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4154.581 ms 00:30:48.430 [2024-11-29 08:02:38.283135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:48.430 [2024-11-29 08:02:38.314973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:48.430 [2024-11-29 08:02:38.315035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:48.430 [2024-11-29 08:02:38.315049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.574 ms 00:30:48.430 [2024-11-29 08:02:38.315058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:48.430 [2024-11-29 08:02:38.315160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:48.430 [2024-11-29 08:02:38.315172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:48.430 [2024-11-29 08:02:38.315182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:30:48.430 [2024-11-29 08:02:38.315192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:48.430 [2024-11-29 08:02:38.350683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:48.430 [2024-11-29 08:02:38.350729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:48.430 [2024-11-29 08:02:38.350744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.434 ms 00:30:48.430 [2024-11-29 08:02:38.350753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:48.430 [2024-11-29 08:02:38.350800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:48.430 [2024-11-29 08:02:38.350810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:48.430 [2024-11-29 08:02:38.350819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:48.430 [2024-11-29 08:02:38.350827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:48.430 [2024-11-29 08:02:38.351426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:48.430 [2024-11-29 08:02:38.351491] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:48.430 [2024-11-29 08:02:38.351504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.540 ms 00:30:48.430 [2024-11-29 08:02:38.351521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:48.430 [2024-11-29 08:02:38.351570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:48.430 [2024-11-29 08:02:38.351580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:48.430 [2024-11-29 08:02:38.351589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:30:48.430 [2024-11-29 08:02:38.351597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:48.430 [2024-11-29 08:02:38.369620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:48.430 [2024-11-29 08:02:38.369664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:48.430 [2024-11-29 08:02:38.369677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.996 ms 00:30:48.430 [2024-11-29 08:02:38.369686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:48.693 [2024-11-29 08:02:38.404508] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:30:48.693 [2024-11-29 08:02:38.404569] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:30:48.693 [2024-11-29 08:02:38.404587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:48.693 [2024-11-29 08:02:38.404597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:30:48.693 [2024-11-29 08:02:38.404608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.776 ms 00:30:48.693 [2024-11-29 08:02:38.404616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:48.693 [2024-11-29 08:02:38.419814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:48.693 [2024-11-29 08:02:38.419867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:30:48.693 [2024-11-29 08:02:38.419881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.135 ms 00:30:48.693 [2024-11-29 08:02:38.419890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:48.693 [2024-11-29 08:02:38.432683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:48.693 [2024-11-29 08:02:38.432730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:30:48.693 [2024-11-29 08:02:38.432743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.733 ms 00:30:48.693 [2024-11-29 08:02:38.432751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:48.693 [2024-11-29 08:02:38.445252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:48.693 [2024-11-29 08:02:38.445310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:30:48.693 [2024-11-29 08:02:38.445323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.449 ms 00:30:48.693 [2024-11-29 08:02:38.445330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:48.693 [2024-11-29 08:02:38.446018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:48.693 [2024-11-29 08:02:38.446050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:48.693 [2024-11-29 
08:02:38.446061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.549 ms 00:30:48.693 [2024-11-29 08:02:38.446069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:48.693 [2024-11-29 08:02:38.511350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:48.693 [2024-11-29 08:02:38.511417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:30:48.693 [2024-11-29 08:02:38.511433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 65.260 ms 00:30:48.693 [2024-11-29 08:02:38.511468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:48.693 [2024-11-29 08:02:38.522571] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:48.693 [2024-11-29 08:02:38.523606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:48.693 [2024-11-29 08:02:38.523645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:48.693 [2024-11-29 08:02:38.523657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.074 ms 00:30:48.693 [2024-11-29 08:02:38.523665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:48.693 [2024-11-29 08:02:38.523763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:48.693 [2024-11-29 08:02:38.523779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:30:48.693 [2024-11-29 08:02:38.523790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:30:48.693 [2024-11-29 08:02:38.523799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:48.693 [2024-11-29 08:02:38.523881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:48.693 [2024-11-29 08:02:38.523893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:48.693 [2024-11-29 08:02:38.523902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:30:48.693 [2024-11-29 08:02:38.523911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:48.693 [2024-11-29 08:02:38.523939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:48.693 [2024-11-29 08:02:38.523948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:48.693 [2024-11-29 08:02:38.523961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:48.693 [2024-11-29 08:02:38.523969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:48.693 [2024-11-29 08:02:38.524003] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:30:48.693 [2024-11-29 08:02:38.524013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:48.693 [2024-11-29 08:02:38.524022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:30:48.693 [2024-11-29 08:02:38.524030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:30:48.693 [2024-11-29 08:02:38.524038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:48.693 [2024-11-29 08:02:38.549140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:48.693 [2024-11-29 08:02:38.549193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:30:48.693 [2024-11-29 08:02:38.549206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.075 ms 00:30:48.693 [2024-11-29 08:02:38.549214] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:48.693 [2024-11-29 08:02:38.549329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:48.693 [2024-11-29 08:02:38.549341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:48.693 [2024-11-29 08:02:38.549351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.064 ms 00:30:48.693 [2024-11-29 08:02:38.549359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:48.693 [2024-11-29 08:02:38.550816] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4453.349 ms, result 0 00:30:48.693 [2024-11-29 08:02:38.565590] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:48.693 [2024-11-29 08:02:38.581606] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:48.693 [2024-11-29 08:02:38.589817] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:49.266 08:02:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:49.266 08:02:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:49.266 08:02:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:49.266 08:02:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:30:49.266 08:02:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:49.266 [2024-11-29 08:02:39.086073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.266 [2024-11-29 08:02:39.086131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:49.266 [2024-11-29 08:02:39.086149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:30:49.266 [2024-11-29 08:02:39.086158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.266 [2024-11-29 08:02:39.086183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.266 [2024-11-29 08:02:39.086192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:49.266 [2024-11-29 08:02:39.086201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:49.266 [2024-11-29 08:02:39.086209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.266 [2024-11-29 08:02:39.086229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.266 [2024-11-29 08:02:39.086238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:49.266 [2024-11-29 08:02:39.086247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:49.266 [2024-11-29 08:02:39.086255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.266 [2024-11-29 08:02:39.086315] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.234 ms, result 0 00:30:49.266 true 00:30:49.266 08:02:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:49.528 { 00:30:49.528 "name": "ftl", 00:30:49.528 "properties": [ 00:30:49.528 { 00:30:49.528 "name": "superblock_version", 00:30:49.528 "value": 5, 00:30:49.528 "read-only": true 00:30:49.528 }, 
00:30:49.528 { 00:30:49.528 "name": "base_device", 00:30:49.528 "bands": [ 00:30:49.528 { 00:30:49.528 "id": 0, 00:30:49.528 "state": "CLOSED", 00:30:49.528 "validity": 1.0 00:30:49.528 }, 00:30:49.528 { 00:30:49.528 "id": 1, 00:30:49.528 "state": "CLOSED", 00:30:49.528 "validity": 1.0 00:30:49.528 }, 00:30:49.528 { 00:30:49.528 "id": 2, 00:30:49.528 "state": "CLOSED", 00:30:49.528 "validity": 0.007843137254901933 00:30:49.528 }, 00:30:49.528 { 00:30:49.528 "id": 3, 00:30:49.528 "state": "FREE", 00:30:49.528 "validity": 0.0 00:30:49.528 }, 00:30:49.528 { 00:30:49.528 "id": 4, 00:30:49.528 "state": "FREE", 00:30:49.528 "validity": 0.0 00:30:49.528 }, 00:30:49.528 { 00:30:49.528 "id": 5, 00:30:49.528 "state": "FREE", 00:30:49.528 "validity": 0.0 00:30:49.528 }, 00:30:49.528 { 00:30:49.528 "id": 6, 00:30:49.528 "state": "FREE", 00:30:49.528 "validity": 0.0 00:30:49.528 }, 00:30:49.528 { 00:30:49.528 "id": 7, 00:30:49.528 "state": "FREE", 00:30:49.528 "validity": 0.0 00:30:49.528 }, 00:30:49.528 { 00:30:49.528 "id": 8, 00:30:49.528 "state": "FREE", 00:30:49.528 "validity": 0.0 00:30:49.528 }, 00:30:49.528 { 00:30:49.528 "id": 9, 00:30:49.528 "state": "FREE", 00:30:49.528 "validity": 0.0 00:30:49.528 }, 00:30:49.528 { 00:30:49.528 "id": 10, 00:30:49.528 "state": "FREE", 00:30:49.528 "validity": 0.0 00:30:49.528 }, 00:30:49.528 { 00:30:49.528 "id": 11, 00:30:49.528 "state": "FREE", 00:30:49.528 "validity": 0.0 00:30:49.528 }, 00:30:49.528 { 00:30:49.528 "id": 12, 00:30:49.528 "state": "FREE", 00:30:49.528 "validity": 0.0 00:30:49.528 }, 00:30:49.528 { 00:30:49.528 "id": 13, 00:30:49.528 "state": "FREE", 00:30:49.528 "validity": 0.0 00:30:49.528 }, 00:30:49.528 { 00:30:49.528 "id": 14, 00:30:49.528 "state": "FREE", 00:30:49.528 "validity": 0.0 00:30:49.528 }, 00:30:49.528 { 00:30:49.528 "id": 15, 00:30:49.528 "state": "FREE", 00:30:49.528 "validity": 0.0 00:30:49.528 }, 00:30:49.528 { 00:30:49.528 "id": 16, 00:30:49.528 "state": "FREE", 00:30:49.528 "validity": 0.0 00:30:49.528 }, 00:30:49.528 { 00:30:49.528 "id": 17, 00:30:49.528 "state": "FREE", 00:30:49.528 "validity": 0.0 00:30:49.528 } 00:30:49.528 ], 00:30:49.528 "read-only": true 00:30:49.528 }, 00:30:49.528 { 00:30:49.528 "name": "cache_device", 00:30:49.528 "type": "bdev", 00:30:49.528 "chunks": [ 00:30:49.528 { 00:30:49.528 "id": 0, 00:30:49.528 "state": "INACTIVE", 00:30:49.528 "utilization": 0.0 00:30:49.528 }, 00:30:49.528 { 00:30:49.528 "id": 1, 00:30:49.528 "state": "OPEN", 00:30:49.528 "utilization": 0.0 00:30:49.528 }, 00:30:49.528 { 00:30:49.528 "id": 2, 00:30:49.528 "state": "OPEN", 00:30:49.528 "utilization": 0.0 00:30:49.528 }, 00:30:49.528 { 00:30:49.528 "id": 3, 00:30:49.528 "state": "FREE", 00:30:49.528 "utilization": 0.0 00:30:49.528 }, 00:30:49.528 { 00:30:49.528 "id": 4, 00:30:49.528 "state": "FREE", 00:30:49.528 "utilization": 0.0 00:30:49.528 } 00:30:49.528 ], 00:30:49.528 "read-only": true 00:30:49.528 }, 00:30:49.528 { 00:30:49.528 "name": "verbose_mode", 00:30:49.528 "value": true, 00:30:49.528 "unit": "", 00:30:49.528 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:49.528 }, 00:30:49.528 { 00:30:49.528 "name": "prep_upgrade_on_shutdown", 00:30:49.528 "value": false, 00:30:49.528 "unit": "", 00:30:49.528 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:49.528 } 00:30:49.528 ] 00:30:49.528 } 00:30:49.528 08:02:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:30:49.528 08:02:39 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:49.528 08:02:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:30:49.789 08:02:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:30:49.789 08:02:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:30:49.789 08:02:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:30:49.789 08:02:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:49.789 08:02:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:30:50.050 Validate MD5 checksum, iteration 1 00:30:50.050 08:02:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:30:50.050 08:02:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:30:50.050 08:02:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:30:50.050 08:02:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:30:50.050 08:02:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:30:50.050 08:02:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:50.050 08:02:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:30:50.050 08:02:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:50.050 08:02:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:50.050 08:02:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:50.050 08:02:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:50.050 08:02:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:50.050 08:02:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:50.050 [2024-11-29 08:02:39.870652] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:30:50.050 [2024-11-29 08:02:39.870800] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83552 ] 00:30:50.311 [2024-11-29 08:02:40.036888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:50.311 [2024-11-29 08:02:40.198670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:52.226  [2024-11-29T08:02:42.743Z] Copying: 576/1024 [MB] (576 MBps) [2024-11-29T08:02:44.136Z] Copying: 1024/1024 [MB] (average 575 MBps) 00:30:54.192 00:30:54.192 08:02:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:30:54.192 08:02:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:56.116 08:02:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:56.116 Validate MD5 checksum, iteration 2 00:30:56.116 08:02:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=093fe01f577cf1faa8a5480859b8f92b 00:30:56.116 08:02:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 093fe01f577cf1faa8a5480859b8f92b != \0\9\3\f\e\0\1\f\5\7\7\c\f\1\f\a\a\8\a\5\4\8\0\8\5\9\b\8\f\9\2\b ]] 00:30:56.116 08:02:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:56.116 08:02:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:56.116 08:02:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:30:56.116 08:02:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:56.116 08:02:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:56.116 08:02:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:56.116 08:02:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:56.116 08:02:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:56.116 08:02:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:56.116 [2024-11-29 08:02:45.944361] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:30:56.116 [2024-11-29 08:02:45.944494] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83619 ] 00:30:56.374 [2024-11-29 08:02:46.104417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:56.374 [2024-11-29 08:02:46.197068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:57.757  [2024-11-29T08:02:48.268Z] Copying: 701/1024 [MB] (701 MBps) [2024-11-29T08:02:49.206Z] Copying: 1024/1024 [MB] (average 689 MBps) 00:30:59.262 00:30:59.262 08:02:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:30:59.262 08:02:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:01.171 08:02:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:01.171 08:02:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=5589ba2672c85e3cc8507c288c8b2d78 00:31:01.171 08:02:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 5589ba2672c85e3cc8507c288c8b2d78 != \5\5\8\9\b\a\2\6\7\2\c\8\5\e\3\c\c\8\5\0\7\c\2\8\8\c\8\b\2\d\7\8 ]] 00:31:01.171 08:02:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:01.171 08:02:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:01.171 08:02:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:31:01.171 08:02:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 83458 ]] 00:31:01.171 08:02:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 83458 00:31:01.171 08:02:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:31:01.171 08:02:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:31:01.171 08:02:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:31:01.171 08:02:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:31:01.171 08:02:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:01.171 08:02:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83675 00:31:01.171 08:02:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:01.171 08:02:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:31:01.171 08:02:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83675 00:31:01.171 08:02:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83675 ']' 00:31:01.171 08:02:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:01.171 08:02:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:01.171 08:02:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:01.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:01.171 08:02:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:01.171 08:02:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:01.171 [2024-11-29 08:02:50.971496] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:31:01.171 [2024-11-29 08:02:50.971616] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83675 ] 00:31:01.171 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 83458 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:31:01.430 [2024-11-29 08:02:51.127033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:01.430 [2024-11-29 08:02:51.210294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:01.998 [2024-11-29 08:02:51.781071] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:01.998 [2024-11-29 08:02:51.781125] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:01.998 [2024-11-29 08:02:51.923955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.998 [2024-11-29 08:02:51.923992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:31:01.998 [2024-11-29 08:02:51.924003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:01.998 [2024-11-29 08:02:51.924010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.998 [2024-11-29 08:02:51.924052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.998 [2024-11-29 08:02:51.924060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:01.998 [2024-11-29 08:02:51.924067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:31:01.998 [2024-11-29 08:02:51.924073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.998 [2024-11-29 08:02:51.924087] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:31:01.998 [2024-11-29 08:02:51.924628] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:31:01.998 [2024-11-29 08:02:51.924646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.998 [2024-11-29 08:02:51.924653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:01.998 [2024-11-29 08:02:51.924659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.563 ms 00:31:01.998 [2024-11-29 08:02:51.924665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.998 [2024-11-29 08:02:51.924891] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:31:01.998 [2024-11-29 08:02:51.937119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.998 [2024-11-29 08:02:51.937151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:31:01.998 [2024-11-29 08:02:51.937160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.229 ms 00:31:01.998 [2024-11-29 08:02:51.937167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:02.260 [2024-11-29 08:02:51.943890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:31:02.260 [2024-11-29 08:02:51.943921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:31:02.260 [2024-11-29 08:02:51.943928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:31:02.260 [2024-11-29 08:02:51.943934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:02.260 [2024-11-29 08:02:51.944173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:02.260 [2024-11-29 08:02:51.944188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:02.260 [2024-11-29 08:02:51.944195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.181 ms 00:31:02.260 [2024-11-29 08:02:51.944201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:02.260 [2024-11-29 08:02:51.944239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:02.260 [2024-11-29 08:02:51.944246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:02.260 [2024-11-29 08:02:51.944253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:31:02.260 [2024-11-29 08:02:51.944258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:02.260 [2024-11-29 08:02:51.944277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:02.260 [2024-11-29 08:02:51.944284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:31:02.260 [2024-11-29 08:02:51.944291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:02.260 [2024-11-29 08:02:51.944296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:02.260 [2024-11-29 08:02:51.944311] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:31:02.260 [2024-11-29 08:02:51.946531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:02.260 [2024-11-29 08:02:51.946556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:02.260 [2024-11-29 08:02:51.946563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.223 ms 00:31:02.260 [2024-11-29 08:02:51.946571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:02.260 [2024-11-29 08:02:51.946589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:02.260 [2024-11-29 08:02:51.946597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:31:02.260 [2024-11-29 08:02:51.946603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:02.260 [2024-11-29 08:02:51.946609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:02.260 [2024-11-29 08:02:51.946625] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:31:02.260 [2024-11-29 08:02:51.946639] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:31:02.260 [2024-11-29 08:02:51.946666] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:31:02.260 [2024-11-29 08:02:51.946682] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:31:02.260 [2024-11-29 08:02:51.946760] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:31:02.260 [2024-11-29 08:02:51.946769] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:31:02.260 [2024-11-29 08:02:51.946777] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:31:02.260 [2024-11-29 08:02:51.946785] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:31:02.260 [2024-11-29 08:02:51.946792] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:31:02.260 [2024-11-29 08:02:51.946798] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:31:02.260 [2024-11-29 08:02:51.946804] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:31:02.260 [2024-11-29 08:02:51.946810] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:31:02.260 [2024-11-29 08:02:51.946815] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:31:02.260 [2024-11-29 08:02:51.946824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:02.260 [2024-11-29 08:02:51.946830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:31:02.260 [2024-11-29 08:02:51.946836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.200 ms 00:31:02.260 [2024-11-29 08:02:51.946841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:02.260 [2024-11-29 08:02:51.946906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:02.260 [2024-11-29 08:02:51.946913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:31:02.260 [2024-11-29 08:02:51.946918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:31:02.260 [2024-11-29 08:02:51.946924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:02.260 [2024-11-29 08:02:51.947006] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:31:02.260 [2024-11-29 08:02:51.947022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:31:02.260 [2024-11-29 08:02:51.947029] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:02.260 [2024-11-29 08:02:51.947035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:02.260 [2024-11-29 08:02:51.947041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:31:02.260 [2024-11-29 08:02:51.947047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:31:02.260 [2024-11-29 08:02:51.947052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:31:02.260 [2024-11-29 08:02:51.947058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:31:02.260 [2024-11-29 08:02:51.947063] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:31:02.260 [2024-11-29 08:02:51.947068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:02.260 [2024-11-29 08:02:51.947073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:31:02.260 [2024-11-29 08:02:51.947080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:31:02.260 [2024-11-29 08:02:51.947085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:02.260 [2024-11-29 08:02:51.947091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:31:02.260 [2024-11-29 08:02:51.947096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:31:02.260 [2024-11-29 08:02:51.947101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:02.260 [2024-11-29 08:02:51.947105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:31:02.260 [2024-11-29 08:02:51.947110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:31:02.260 [2024-11-29 08:02:51.947115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:02.260 [2024-11-29 08:02:51.947120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:31:02.260 [2024-11-29 08:02:51.947125] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:31:02.260 [2024-11-29 08:02:51.947134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:02.260 [2024-11-29 08:02:51.947139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:31:02.261 [2024-11-29 08:02:51.947144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:31:02.261 [2024-11-29 08:02:51.947149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:02.261 [2024-11-29 08:02:51.947154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:31:02.261 [2024-11-29 08:02:51.947159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:31:02.261 [2024-11-29 08:02:51.947163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:02.261 [2024-11-29 08:02:51.947168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:31:02.261 [2024-11-29 08:02:51.947173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:31:02.261 [2024-11-29 08:02:51.947177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:02.261 [2024-11-29 08:02:51.947182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:31:02.261 [2024-11-29 08:02:51.947187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:31:02.261 [2024-11-29 08:02:51.947193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:02.261 [2024-11-29 08:02:51.947198] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:31:02.261 [2024-11-29 08:02:51.947204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:31:02.261 [2024-11-29 08:02:51.947208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:02.261 [2024-11-29 08:02:51.947213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:31:02.261 [2024-11-29 08:02:51.947217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:31:02.261 [2024-11-29 08:02:51.947222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:02.261 [2024-11-29 08:02:51.947227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:31:02.261 [2024-11-29 08:02:51.947232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:31:02.261 [2024-11-29 08:02:51.947237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:02.261 [2024-11-29 08:02:51.947242] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:31:02.261 [2024-11-29 08:02:51.947248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:31:02.261 [2024-11-29 08:02:51.947253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:02.261 [2024-11-29 08:02:51.947259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:31:02.261 [2024-11-29 08:02:51.947265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:31:02.261 [2024-11-29 08:02:51.947270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:31:02.261 [2024-11-29 08:02:51.947276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:31:02.261 [2024-11-29 08:02:51.947281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:31:02.261 [2024-11-29 08:02:51.947286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:31:02.261 [2024-11-29 08:02:51.947291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:31:02.261 [2024-11-29 08:02:51.947297] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:31:02.261 [2024-11-29 08:02:51.947304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:02.261 [2024-11-29 08:02:51.947310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:31:02.261 [2024-11-29 08:02:51.947316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:31:02.261 [2024-11-29 08:02:51.947321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:31:02.261 [2024-11-29 08:02:51.947326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:31:02.261 [2024-11-29 08:02:51.947332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:31:02.261 [2024-11-29 08:02:51.947337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:31:02.261 [2024-11-29 08:02:51.947342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:31:02.261 [2024-11-29 08:02:51.947348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:31:02.261 [2024-11-29 08:02:51.947353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:31:02.261 [2024-11-29 08:02:51.947358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:31:02.261 [2024-11-29 08:02:51.947363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:31:02.261 [2024-11-29 08:02:51.947368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:31:02.261 [2024-11-29 08:02:51.947373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:31:02.261 [2024-11-29 08:02:51.947378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:31:02.261 [2024-11-29 08:02:51.947383] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:31:02.261 [2024-11-29 08:02:51.947389] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:02.261 [2024-11-29 08:02:51.947396] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:02.261 [2024-11-29 08:02:51.947402] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:31:02.261 [2024-11-29 08:02:51.947407] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:31:02.261 [2024-11-29 08:02:51.947413] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:31:02.261 [2024-11-29 08:02:51.947418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:02.261 [2024-11-29 08:02:51.947424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:31:02.261 [2024-11-29 08:02:51.947429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.473 ms 00:31:02.261 [2024-11-29 08:02:51.947435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:02.261 [2024-11-29 08:02:51.966615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:02.261 [2024-11-29 08:02:51.966648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:02.261 [2024-11-29 08:02:51.966660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.128 ms 00:31:02.261 [2024-11-29 08:02:51.966666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:02.261 [2024-11-29 08:02:51.966696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:02.261 [2024-11-29 08:02:51.966703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:31:02.261 [2024-11-29 08:02:51.966710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:31:02.261 [2024-11-29 08:02:51.966716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:02.261 [2024-11-29 08:02:51.990680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:02.261 [2024-11-29 08:02:51.990706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:02.261 [2024-11-29 08:02:51.990713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.924 ms 00:31:02.261 [2024-11-29 08:02:51.990719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:02.261 [2024-11-29 08:02:51.990740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:02.261 [2024-11-29 08:02:51.990747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:02.261 [2024-11-29 08:02:51.990753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:02.261 [2024-11-29 08:02:51.990762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:02.261 [2024-11-29 08:02:51.990830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:02.261 [2024-11-29 08:02:51.990838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:02.261 [2024-11-29 08:02:51.990845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:31:02.261 [2024-11-29 08:02:51.990850] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:31:02.261 [2024-11-29 08:02:51.990880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:02.261 [2024-11-29 08:02:51.990888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:02.261 [2024-11-29 08:02:51.990893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:31:02.261 [2024-11-29 08:02:51.990899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:02.261 [2024-11-29 08:02:52.002256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:02.261 [2024-11-29 08:02:52.002284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:02.261 [2024-11-29 08:02:52.002292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.338 ms 00:31:02.261 [2024-11-29 08:02:52.002300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:02.261 [2024-11-29 08:02:52.002368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:02.261 [2024-11-29 08:02:52.002377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:31:02.261 [2024-11-29 08:02:52.002383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:02.261 [2024-11-29 08:02:52.002390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:02.261 [2024-11-29 08:02:52.031772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:02.261 [2024-11-29 08:02:52.031807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:31:02.261 [2024-11-29 08:02:52.031817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.368 ms 00:31:02.261 [2024-11-29 08:02:52.031824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:02.261 [2024-11-29 08:02:52.038915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:02.261 [2024-11-29 08:02:52.038950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:31:02.261 [2024-11-29 08:02:52.038959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.386 ms 00:31:02.261 [2024-11-29 08:02:52.038965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:02.261 [2024-11-29 08:02:52.081345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:02.261 [2024-11-29 08:02:52.081387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:31:02.261 [2024-11-29 08:02:52.081396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 42.341 ms 00:31:02.261 [2024-11-29 08:02:52.081402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:02.261 [2024-11-29 08:02:52.081512] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:31:02.261 [2024-11-29 08:02:52.081587] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:31:02.262 [2024-11-29 08:02:52.081658] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:31:02.262 [2024-11-29 08:02:52.081730] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:31:02.262 [2024-11-29 08:02:52.081737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:02.262 [2024-11-29 08:02:52.081744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:31:02.262 [2024-11-29 
08:02:52.081750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.301 ms 00:31:02.262 [2024-11-29 08:02:52.081756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:02.262 [2024-11-29 08:02:52.081798] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:31:02.262 [2024-11-29 08:02:52.081807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:02.262 [2024-11-29 08:02:52.081816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:31:02.262 [2024-11-29 08:02:52.081823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:31:02.262 [2024-11-29 08:02:52.081829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:02.262 [2024-11-29 08:02:52.092845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:02.262 [2024-11-29 08:02:52.092877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:31:02.262 [2024-11-29 08:02:52.092885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.999 ms 00:31:02.262 [2024-11-29 08:02:52.092892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:02.262 [2024-11-29 08:02:52.099231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:02.262 [2024-11-29 08:02:52.099257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:31:02.262 [2024-11-29 08:02:52.099265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:31:02.262 [2024-11-29 08:02:52.099271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:02.262 [2024-11-29 08:02:52.099334] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:31:02.262 [2024-11-29 08:02:52.099454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:02.262 [2024-11-29 08:02:52.099470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:31:02.262 [2024-11-29 08:02:52.099477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.110 ms 00:31:02.262 [2024-11-29 08:02:52.099483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:02.836 [2024-11-29 08:02:52.663828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:02.836 [2024-11-29 08:02:52.663889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:31:02.836 [2024-11-29 08:02:52.663904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 563.746 ms 00:31:02.836 [2024-11-29 08:02:52.663913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:02.836 [2024-11-29 08:02:52.668343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:02.836 [2024-11-29 08:02:52.668377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:31:02.836 [2024-11-29 08:02:52.668388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.450 ms 00:31:02.836 [2024-11-29 08:02:52.668402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:02.836 [2024-11-29 08:02:52.669081] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:31:02.836 [2024-11-29 08:02:52.669115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:02.836 [2024-11-29 08:02:52.669124] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:31:02.836 [2024-11-29 08:02:52.669134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.642 ms 00:31:02.836 [2024-11-29 08:02:52.669142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:02.836 [2024-11-29 08:02:52.669207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:02.836 [2024-11-29 08:02:52.669222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:31:02.836 [2024-11-29 08:02:52.669231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:02.836 [2024-11-29 08:02:52.669247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:02.836 [2024-11-29 08:02:52.669282] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 569.945 ms, result 0 00:31:02.836 [2024-11-29 08:02:52.669335] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:31:02.836 [2024-11-29 08:02:52.669406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:02.836 [2024-11-29 08:02:52.669422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:31:02.836 [2024-11-29 08:02:52.669434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.072 ms 00:31:02.836 [2024-11-29 08:02:52.669455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.408 [2024-11-29 08:02:53.289340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.408 [2024-11-29 08:02:53.289417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:31:03.408 [2024-11-29 08:02:53.289464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 618.990 ms 00:31:03.408 [2024-11-29 08:02:53.289473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.408 [2024-11-29 08:02:53.293762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.408 [2024-11-29 08:02:53.293801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:31:03.408 [2024-11-29 08:02:53.293811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.279 ms 00:31:03.408 [2024-11-29 08:02:53.293819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.408 [2024-11-29 08:02:53.294746] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:31:03.408 [2024-11-29 08:02:53.294787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.408 [2024-11-29 08:02:53.294796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:31:03.408 [2024-11-29 08:02:53.294805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.936 ms 00:31:03.408 [2024-11-29 08:02:53.294813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.408 [2024-11-29 08:02:53.294846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.408 [2024-11-29 08:02:53.294856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:31:03.408 [2024-11-29 08:02:53.294864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:03.408 [2024-11-29 08:02:53.294871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.408 [2024-11-29 
08:02:53.294909] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 625.577 ms, result 0 00:31:03.408 [2024-11-29 08:02:53.294951] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:03.408 [2024-11-29 08:02:53.294963] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:31:03.408 [2024-11-29 08:02:53.294972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.408 [2024-11-29 08:02:53.294980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:31:03.408 [2024-11-29 08:02:53.294989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1195.652 ms 00:31:03.408 [2024-11-29 08:02:53.294996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.408 [2024-11-29 08:02:53.295026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.408 [2024-11-29 08:02:53.295046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:31:03.408 [2024-11-29 08:02:53.295054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:03.408 [2024-11-29 08:02:53.295062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.408 [2024-11-29 08:02:53.306473] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:31:03.408 [2024-11-29 08:02:53.306584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.408 [2024-11-29 08:02:53.306595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:31:03.408 [2024-11-29 08:02:53.306606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.505 ms 00:31:03.408 [2024-11-29 08:02:53.306614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.408 [2024-11-29 08:02:53.307296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.408 [2024-11-29 08:02:53.307324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:31:03.408 [2024-11-29 08:02:53.307334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.614 ms 00:31:03.408 [2024-11-29 08:02:53.307341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.408 [2024-11-29 08:02:53.309608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.408 [2024-11-29 08:02:53.309634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:31:03.408 [2024-11-29 08:02:53.309644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.250 ms 00:31:03.408 [2024-11-29 08:02:53.309653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.408 [2024-11-29 08:02:53.309693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.408 [2024-11-29 08:02:53.309703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:31:03.408 [2024-11-29 08:02:53.309714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:03.408 [2024-11-29 08:02:53.309722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.408 [2024-11-29 08:02:53.309826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.408 [2024-11-29 08:02:53.309845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:31:03.408 
[2024-11-29 08:02:53.309853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:31:03.408 [2024-11-29 08:02:53.309861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.408 [2024-11-29 08:02:53.309882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.408 [2024-11-29 08:02:53.309891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:31:03.408 [2024-11-29 08:02:53.309899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:03.408 [2024-11-29 08:02:53.309906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.408 [2024-11-29 08:02:53.309934] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:31:03.408 [2024-11-29 08:02:53.309945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.408 [2024-11-29 08:02:53.309952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:31:03.408 [2024-11-29 08:02:53.309960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:31:03.408 [2024-11-29 08:02:53.309967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.408 [2024-11-29 08:02:53.310019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.408 [2024-11-29 08:02:53.310029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:31:03.408 [2024-11-29 08:02:53.310036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:31:03.408 [2024-11-29 08:02:53.310044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.408 [2024-11-29 08:02:53.311050] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1386.650 ms, result 0 00:31:03.408 [2024-11-29 08:02:53.323476] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:03.408 [2024-11-29 08:02:53.339454] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:31:03.408 [2024-11-29 08:02:53.347609] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:03.669 08:02:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:03.669 08:02:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:31:03.669 08:02:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:03.669 08:02:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:31:03.669 08:02:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:31:03.669 08:02:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:31:03.669 08:02:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:31:03.669 Validate MD5 checksum, iteration 1 00:31:03.669 08:02:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:03.669 08:02:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:31:03.669 08:02:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:03.669 08:02:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:03.669 08:02:53 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:03.669 08:02:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:03.669 08:02:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:03.669 08:02:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:03.669 [2024-11-29 08:02:53.511295] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:31:03.669 [2024-11-29 08:02:53.511382] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83710 ] 00:31:03.930 [2024-11-29 08:02:53.664821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:03.930 [2024-11-29 08:02:53.764311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:05.852  [2024-11-29T08:02:55.796Z] Copying: 721/1024 [MB] (721 MBps) [2024-11-29T08:03:01.063Z] Copying: 1024/1024 [MB] (average 712 MBps) 00:31:11.119 00:31:11.119 08:03:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:31:11.119 08:03:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:12.527 08:03:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:12.527 Validate MD5 checksum, iteration 2 00:31:12.527 08:03:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=093fe01f577cf1faa8a5480859b8f92b 00:31:12.527 08:03:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 093fe01f577cf1faa8a5480859b8f92b != \0\9\3\f\e\0\1\f\5\7\7\c\f\1\f\a\a\8\a\5\4\8\0\8\5\9\b\8\f\9\2\b ]] 00:31:12.527 08:03:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:12.527 08:03:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:12.527 08:03:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:31:12.527 08:03:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:12.527 08:03:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:12.527 08:03:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:12.527 08:03:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:12.527 08:03:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:12.527 08:03:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:12.527 [2024-11-29 08:03:02.279948] Starting SPDK v25.01-pre git sha1 
35cd3e84d / DPDK 24.03.0 initialization... 00:31:12.527 [2024-11-29 08:03:02.280197] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83799 ] 00:31:12.527 [2024-11-29 08:03:02.438701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:12.800 [2024-11-29 08:03:02.544744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:14.181  [2024-11-29T08:03:05.062Z] Copying: 593/1024 [MB] (593 MBps) [2024-11-29T08:03:06.969Z] Copying: 1024/1024 [MB] (average 597 MBps) 00:31:17.025 00:31:17.025 08:03:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:31:17.025 08:03:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:18.928 08:03:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:18.928 08:03:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=5589ba2672c85e3cc8507c288c8b2d78 00:31:18.928 08:03:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 5589ba2672c85e3cc8507c288c8b2d78 != \5\5\8\9\b\a\2\6\7\2\c\8\5\e\3\c\c\8\5\0\7\c\2\8\8\c\8\b\2\d\7\8 ]] 00:31:18.928 08:03:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:18.928 08:03:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:18.928 08:03:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:31:18.928 08:03:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:31:18.928 08:03:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:31:18.928 08:03:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:18.928 08:03:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:31:18.928 08:03:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:31:18.928 08:03:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:31:18.928 08:03:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:31:18.928 08:03:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83675 ]] 00:31:18.928 08:03:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83675 00:31:18.928 08:03:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83675 ']' 00:31:18.928 08:03:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83675 00:31:18.928 08:03:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:31:18.928 08:03:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:18.928 08:03:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83675 00:31:18.928 killing process with pid 83675 00:31:18.928 08:03:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:18.928 08:03:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:18.928 08:03:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83675' 00:31:18.928 08:03:08 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 83675 00:31:18.928 08:03:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83675 00:31:19.496 [2024-11-29 08:03:09.171731] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:31:19.496 [2024-11-29 08:03:09.181723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.496 [2024-11-29 08:03:09.181760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:31:19.496 [2024-11-29 08:03:09.181770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:19.496 [2024-11-29 08:03:09.181777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.496 [2024-11-29 08:03:09.181795] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:31:19.496 [2024-11-29 08:03:09.183897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.496 [2024-11-29 08:03:09.183922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:31:19.496 [2024-11-29 08:03:09.183934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.091 ms 00:31:19.496 [2024-11-29 08:03:09.183940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.496 [2024-11-29 08:03:09.184120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.496 [2024-11-29 08:03:09.184129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:31:19.496 [2024-11-29 08:03:09.184136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.163 ms 00:31:19.496 [2024-11-29 08:03:09.184142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.496 [2024-11-29 08:03:09.185225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.496 [2024-11-29 08:03:09.185247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:31:19.496 [2024-11-29 08:03:09.185254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.072 ms 00:31:19.496 [2024-11-29 08:03:09.185264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.496 [2024-11-29 08:03:09.186139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.496 [2024-11-29 08:03:09.186168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:31:19.496 [2024-11-29 08:03:09.186175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.853 ms 00:31:19.496 [2024-11-29 08:03:09.186181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.496 [2024-11-29 08:03:09.193587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.496 [2024-11-29 08:03:09.193615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:31:19.496 [2024-11-29 08:03:09.193627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.381 ms 00:31:19.496 [2024-11-29 08:03:09.193633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.496 [2024-11-29 08:03:09.197784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.496 [2024-11-29 08:03:09.197812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:31:19.496 [2024-11-29 08:03:09.197821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.123 ms 00:31:19.496 [2024-11-29 08:03:09.197828] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:31:19.496 [2024-11-29 08:03:09.197893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.496 [2024-11-29 08:03:09.197902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:31:19.496 [2024-11-29 08:03:09.197909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:31:19.496 [2024-11-29 08:03:09.197918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.496 [2024-11-29 08:03:09.204740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.496 [2024-11-29 08:03:09.204765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:31:19.496 [2024-11-29 08:03:09.204772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.810 ms 00:31:19.496 [2024-11-29 08:03:09.204778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.496 [2024-11-29 08:03:09.211902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.496 [2024-11-29 08:03:09.211928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:31:19.496 [2024-11-29 08:03:09.211937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.098 ms 00:31:19.496 [2024-11-29 08:03:09.211943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.496 [2024-11-29 08:03:09.218647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.496 [2024-11-29 08:03:09.218673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:31:19.496 [2024-11-29 08:03:09.218681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.678 ms 00:31:19.496 [2024-11-29 08:03:09.218688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.496 [2024-11-29 08:03:09.225733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.496 [2024-11-29 08:03:09.225759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:31:19.496 [2024-11-29 08:03:09.225766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.998 ms 00:31:19.496 [2024-11-29 08:03:09.225772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.496 [2024-11-29 08:03:09.225797] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:31:19.496 [2024-11-29 08:03:09.225808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:19.496 [2024-11-29 08:03:09.225815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:31:19.496 [2024-11-29 08:03:09.225821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:31:19.496 [2024-11-29 08:03:09.225827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:19.496 [2024-11-29 08:03:09.225833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:19.496 [2024-11-29 08:03:09.225839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:19.496 [2024-11-29 08:03:09.225845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:19.496 [2024-11-29 08:03:09.225850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:19.497 
[2024-11-29 08:03:09.225856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:19.497 [2024-11-29 08:03:09.225862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:19.497 [2024-11-29 08:03:09.225867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:19.497 [2024-11-29 08:03:09.225873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:19.497 [2024-11-29 08:03:09.225878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:19.497 [2024-11-29 08:03:09.225884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:19.497 [2024-11-29 08:03:09.225889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:19.497 [2024-11-29 08:03:09.225895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:19.497 [2024-11-29 08:03:09.225900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:19.497 [2024-11-29 08:03:09.225905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:19.497 [2024-11-29 08:03:09.225912] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:31:19.497 [2024-11-29 08:03:09.225918] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: b15728e8-ebda-4091-89bb-fcd808674f5c 00:31:19.497 [2024-11-29 08:03:09.225924] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:31:19.497 [2024-11-29 08:03:09.225930] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:31:19.497 [2024-11-29 08:03:09.225935] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:31:19.497 [2024-11-29 08:03:09.225941] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:31:19.497 [2024-11-29 08:03:09.225946] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:31:19.497 [2024-11-29 08:03:09.225953] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:31:19.497 [2024-11-29 08:03:09.225962] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:31:19.497 [2024-11-29 08:03:09.225967] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:31:19.497 [2024-11-29 08:03:09.225974] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:31:19.497 [2024-11-29 08:03:09.225980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.497 [2024-11-29 08:03:09.225985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:31:19.497 [2024-11-29 08:03:09.225992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.183 ms 00:31:19.497 [2024-11-29 08:03:09.225998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.497 [2024-11-29 08:03:09.235541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.497 [2024-11-29 08:03:09.235567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:31:19.497 [2024-11-29 08:03:09.235575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.530 ms 00:31:19.497 [2024-11-29 08:03:09.235581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
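For reference, the target teardown traced a few lines above (the `killprocess 83675` call that triggers this FTL shutdown dump) follows the usual autotest helper pattern: guard the pid, check the process is still alive, refuse to kill a sudo wrapper, then kill and wait. A minimal sketch in bash, with the platform branches and error handling of the real autotest_common.sh helper simplified; only the commands visible in the trace (kill -0, ps --no-headers -o comm=, kill, wait) are used:

  # condensed sketch of the killprocess pattern seen in the trace above
  killprocess() {
      local pid=$1
      [[ -n "$pid" ]] || return 1                    # empty-pid guard
      kill -0 "$pid" || return 1                     # process must still be running
      local process_name
      process_name=$(ps --no-headers -o comm= "$pid")
      [[ "$process_name" != sudo ]] || return 1      # never kill a sudo wrapper
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                    # block until FTL shutdown completes
  }

In this run the waited-on process is the spdk_tgt reactor (comm reports reactor_0), so the wait returns only after the 'FTL shutdown' management process above finishes persisting metadata.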
00:31:19.497 [2024-11-29 08:03:09.235853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.497 [2024-11-29 08:03:09.235861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:31:19.497 [2024-11-29 08:03:09.235867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.254 ms 00:31:19.497 [2024-11-29 08:03:09.235873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.497 [2024-11-29 08:03:09.269125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:19.497 [2024-11-29 08:03:09.269153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:19.497 [2024-11-29 08:03:09.269161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:19.497 [2024-11-29 08:03:09.269171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.497 [2024-11-29 08:03:09.269194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:19.497 [2024-11-29 08:03:09.269201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:19.497 [2024-11-29 08:03:09.269207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:19.497 [2024-11-29 08:03:09.269214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.497 [2024-11-29 08:03:09.269262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:19.497 [2024-11-29 08:03:09.269270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:19.497 [2024-11-29 08:03:09.269276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:19.497 [2024-11-29 08:03:09.269282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.497 [2024-11-29 08:03:09.269297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:19.497 [2024-11-29 08:03:09.269303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:19.497 [2024-11-29 08:03:09.269309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:19.497 [2024-11-29 08:03:09.269314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.497 [2024-11-29 08:03:09.329582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:19.497 [2024-11-29 08:03:09.329612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:19.497 [2024-11-29 08:03:09.329620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:19.497 [2024-11-29 08:03:09.329626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.497 [2024-11-29 08:03:09.378612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:19.497 [2024-11-29 08:03:09.378646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:19.497 [2024-11-29 08:03:09.378655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:19.497 [2024-11-29 08:03:09.378661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.497 [2024-11-29 08:03:09.378727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:19.497 [2024-11-29 08:03:09.378735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:19.497 [2024-11-29 08:03:09.378741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:19.497 [2024-11-29 08:03:09.378747] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.497 [2024-11-29 08:03:09.378779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:19.497 [2024-11-29 08:03:09.378793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:19.497 [2024-11-29 08:03:09.378799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:19.497 [2024-11-29 08:03:09.378805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.497 [2024-11-29 08:03:09.378875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:19.497 [2024-11-29 08:03:09.378883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:19.497 [2024-11-29 08:03:09.378889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:19.497 [2024-11-29 08:03:09.378895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.497 [2024-11-29 08:03:09.378921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:19.497 [2024-11-29 08:03:09.378928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:31:19.497 [2024-11-29 08:03:09.378936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:19.497 [2024-11-29 08:03:09.378942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.497 [2024-11-29 08:03:09.378968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:19.497 [2024-11-29 08:03:09.378975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:19.497 [2024-11-29 08:03:09.378981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:19.497 [2024-11-29 08:03:09.378988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.497 [2024-11-29 08:03:09.379018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:19.497 [2024-11-29 08:03:09.379027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:19.497 [2024-11-29 08:03:09.379033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:19.497 [2024-11-29 08:03:09.379039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.497 [2024-11-29 08:03:09.379129] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 197.384 ms, result 0 00:31:20.435 08:03:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:31:20.435 08:03:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:20.435 08:03:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:31:20.435 08:03:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:31:20.435 08:03:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:31:20.435 08:03:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:20.435 08:03:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:31:20.435 08:03:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:20.435 Remove shared memory files 00:31:20.435 08:03:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:31:20.435 08:03:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:31:20.435 08:03:10 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid83458 00:31:20.435 08:03:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:20.435 08:03:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:31:20.435 ************************************ 00:31:20.435 END TEST ftl_upgrade_shutdown 00:31:20.435 ************************************ 00:31:20.435 00:31:20.435 real 1m27.106s 00:31:20.435 user 1m58.840s 00:31:20.435 sys 0m19.843s 00:31:20.435 08:03:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:20.435 08:03:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:20.435 08:03:10 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:31:20.435 08:03:10 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:31:20.435 08:03:10 ftl -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:31:20.435 08:03:10 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:20.435 08:03:10 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:20.435 ************************************ 00:31:20.435 START TEST ftl_restore_fast 00:31:20.435 ************************************ 00:31:20.435 08:03:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:31:20.435 * Looking for test storage... 00:31:20.435 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lcov --version 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:20.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:20.436 --rc genhtml_branch_coverage=1 00:31:20.436 --rc genhtml_function_coverage=1 00:31:20.436 --rc genhtml_legend=1 00:31:20.436 --rc geninfo_all_blocks=1 00:31:20.436 --rc geninfo_unexecuted_blocks=1 00:31:20.436 00:31:20.436 ' 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:20.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:20.436 --rc genhtml_branch_coverage=1 00:31:20.436 --rc genhtml_function_coverage=1 00:31:20.436 --rc genhtml_legend=1 00:31:20.436 --rc geninfo_all_blocks=1 00:31:20.436 --rc geninfo_unexecuted_blocks=1 00:31:20.436 00:31:20.436 ' 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:20.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:20.436 --rc genhtml_branch_coverage=1 00:31:20.436 --rc genhtml_function_coverage=1 00:31:20.436 --rc genhtml_legend=1 00:31:20.436 --rc geninfo_all_blocks=1 00:31:20.436 --rc geninfo_unexecuted_blocks=1 00:31:20.436 00:31:20.436 ' 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:20.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:20.436 --rc genhtml_branch_coverage=1 00:31:20.436 --rc genhtml_function_coverage=1 00:31:20.436 --rc genhtml_legend=1 00:31:20.436 --rc geninfo_all_blocks=1 00:31:20.436 --rc geninfo_unexecuted_blocks=1 00:31:20.436 00:31:20.436 ' 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
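Before the per-step trace that follows, it may help to see the whole bdev construction the restore_fast test performs in one place. A condensed sketch of the rpc.py sequence, using the PCI addresses, sizes, and flags from this particular run; the shell variable names are illustrative, and the real ftl/common.sh and restore.sh helpers add size checks, lvstore cleanup, and timeouts around these calls:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0     # base NVMe device -> nvme0n1
  lvs=$($rpc bdev_lvol_create_lvstore nvme0n1 lvs)                      # lvstore on the base bdev, returns its UUID
  lvol=$($rpc bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")           # thin-provisioned lvol (size in MiB)
  $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0      # NV cache device -> nvc0n1
  $rpc bdev_split_create nvc0n1 -s 5171 1                               # one 5171 MiB cache partition -> nvc0n1p0
  $rpc -t 240 bdev_ftl_create -b ftl0 -d "$lvol" --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown

Because the lvol is thin-provisioned (-t), its 103424 MiB virtual size can exceed the small base device reported by bdev_get_bdevs below, and --fast-shutdown is appended only because the test was invoked with -f.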
00:31:20.436 08:03:10 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.e9NEDLmxPP 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:31:20.436 08:03:10 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=83962 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 83962 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # '[' -z 83962 ']' 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:20.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:31:20.436 08:03:10 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:20.436 [2024-11-29 08:03:10.335846] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:31:20.436 [2024-11-29 08:03:10.335974] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83962 ] 00:31:20.696 [2024-11-29 08:03:10.495881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:20.696 [2024-11-29 08:03:10.579007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:21.263 08:03:11 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:21.263 08:03:11 ftl.ftl_restore_fast -- common/autotest_common.sh@868 -- # return 0 00:31:21.263 08:03:11 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:31:21.263 08:03:11 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:31:21.263 08:03:11 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:31:21.263 08:03:11 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:31:21.263 08:03:11 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:31:21.263 08:03:11 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:31:21.522 08:03:11 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:31:21.522 08:03:11 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:31:21.522 08:03:11 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:31:21.522 08:03:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:31:21.522 08:03:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:21.522 08:03:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:31:21.522 08:03:11 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1385 -- # local nb 00:31:21.522 08:03:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:31:21.781 08:03:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:21.781 { 00:31:21.781 "name": "nvme0n1", 00:31:21.781 "aliases": [ 00:31:21.781 "495f957c-b989-4a4f-8007-0e457554aeec" 00:31:21.781 ], 00:31:21.781 "product_name": "NVMe disk", 00:31:21.781 "block_size": 4096, 00:31:21.781 "num_blocks": 1310720, 00:31:21.781 "uuid": "495f957c-b989-4a4f-8007-0e457554aeec", 00:31:21.781 "numa_id": -1, 00:31:21.781 "assigned_rate_limits": { 00:31:21.781 "rw_ios_per_sec": 0, 00:31:21.781 "rw_mbytes_per_sec": 0, 00:31:21.781 "r_mbytes_per_sec": 0, 00:31:21.781 "w_mbytes_per_sec": 0 00:31:21.781 }, 00:31:21.781 "claimed": true, 00:31:21.781 "claim_type": "read_many_write_one", 00:31:21.781 "zoned": false, 00:31:21.781 "supported_io_types": { 00:31:21.781 "read": true, 00:31:21.781 "write": true, 00:31:21.781 "unmap": true, 00:31:21.781 "flush": true, 00:31:21.781 "reset": true, 00:31:21.781 "nvme_admin": true, 00:31:21.781 "nvme_io": true, 00:31:21.781 "nvme_io_md": false, 00:31:21.781 "write_zeroes": true, 00:31:21.781 "zcopy": false, 00:31:21.781 "get_zone_info": false, 00:31:21.781 "zone_management": false, 00:31:21.781 "zone_append": false, 00:31:21.781 "compare": true, 00:31:21.781 "compare_and_write": false, 00:31:21.781 "abort": true, 00:31:21.781 "seek_hole": false, 00:31:21.781 "seek_data": false, 00:31:21.781 "copy": true, 00:31:21.781 "nvme_iov_md": false 00:31:21.781 }, 00:31:21.781 "driver_specific": { 00:31:21.781 "nvme": [ 00:31:21.781 { 00:31:21.781 "pci_address": "0000:00:11.0", 00:31:21.781 "trid": { 00:31:21.781 "trtype": "PCIe", 00:31:21.781 "traddr": "0000:00:11.0" 00:31:21.781 }, 00:31:21.781 "ctrlr_data": { 00:31:21.781 "cntlid": 0, 00:31:21.781 "vendor_id": "0x1b36", 00:31:21.781 "model_number": "QEMU NVMe Ctrl", 00:31:21.781 "serial_number": "12341", 00:31:21.781 "firmware_revision": "8.0.0", 00:31:21.781 "subnqn": "nqn.2019-08.org.qemu:12341", 00:31:21.781 "oacs": { 00:31:21.781 "security": 0, 00:31:21.781 "format": 1, 00:31:21.781 "firmware": 0, 00:31:21.781 "ns_manage": 1 00:31:21.781 }, 00:31:21.781 "multi_ctrlr": false, 00:31:21.781 "ana_reporting": false 00:31:21.781 }, 00:31:21.781 "vs": { 00:31:21.781 "nvme_version": "1.4" 00:31:21.781 }, 00:31:21.781 "ns_data": { 00:31:21.781 "id": 1, 00:31:21.781 "can_share": false 00:31:21.781 } 00:31:21.781 } 00:31:21.781 ], 00:31:21.781 "mp_policy": "active_passive" 00:31:21.781 } 00:31:21.781 } 00:31:21.781 ]' 00:31:21.781 08:03:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:21.781 08:03:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:31:21.781 08:03:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:21.781 08:03:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=1310720 00:31:21.781 08:03:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:31:21.781 08:03:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 5120 00:31:21.781 08:03:11 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:31:21.781 08:03:11 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:31:21.781 08:03:11 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:31:21.781 08:03:11 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:21.781 08:03:11 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:31:22.039 08:03:11 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=55edc61c-77de-487d-8d09-a8071fe7f212 00:31:22.039 08:03:11 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:31:22.039 08:03:11 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 55edc61c-77de-487d-8d09-a8071fe7f212 00:31:22.298 08:03:12 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:31:22.557 08:03:12 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=cafb0aff-fe81-49e5-bd34-8bd1482b2331 00:31:22.557 08:03:12 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u cafb0aff-fe81-49e5-bd34-8bd1482b2331 00:31:22.557 08:03:12 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=e9d21571-2253-43d7-a6d5-b52e192be980 00:31:22.557 08:03:12 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:31:22.557 08:03:12 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 e9d21571-2253-43d7-a6d5-b52e192be980 00:31:22.557 08:03:12 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:31:22.557 08:03:12 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:31:22.557 08:03:12 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=e9d21571-2253-43d7-a6d5-b52e192be980 00:31:22.557 08:03:12 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:31:22.557 08:03:12 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size e9d21571-2253-43d7-a6d5-b52e192be980 00:31:22.557 08:03:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=e9d21571-2253-43d7-a6d5-b52e192be980 00:31:22.557 08:03:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:22.557 08:03:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:31:22.557 08:03:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:31:22.557 08:03:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e9d21571-2253-43d7-a6d5-b52e192be980 00:31:22.816 08:03:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:22.816 { 00:31:22.816 "name": "e9d21571-2253-43d7-a6d5-b52e192be980", 00:31:22.816 "aliases": [ 00:31:22.816 "lvs/nvme0n1p0" 00:31:22.816 ], 00:31:22.816 "product_name": "Logical Volume", 00:31:22.816 "block_size": 4096, 00:31:22.816 "num_blocks": 26476544, 00:31:22.816 "uuid": "e9d21571-2253-43d7-a6d5-b52e192be980", 00:31:22.816 "assigned_rate_limits": { 00:31:22.816 "rw_ios_per_sec": 0, 00:31:22.816 "rw_mbytes_per_sec": 0, 00:31:22.816 "r_mbytes_per_sec": 0, 00:31:22.816 "w_mbytes_per_sec": 0 00:31:22.816 }, 00:31:22.816 "claimed": false, 00:31:22.816 "zoned": false, 00:31:22.816 "supported_io_types": { 00:31:22.816 "read": true, 00:31:22.816 "write": true, 00:31:22.816 "unmap": true, 00:31:22.816 "flush": false, 00:31:22.816 "reset": true, 00:31:22.816 "nvme_admin": false, 00:31:22.816 "nvme_io": false, 00:31:22.816 "nvme_io_md": false, 00:31:22.816 "write_zeroes": true, 00:31:22.816 "zcopy": false, 00:31:22.816 "get_zone_info": false, 00:31:22.816 "zone_management": false, 00:31:22.816 
"zone_append": false, 00:31:22.816 "compare": false, 00:31:22.816 "compare_and_write": false, 00:31:22.816 "abort": false, 00:31:22.816 "seek_hole": true, 00:31:22.816 "seek_data": true, 00:31:22.816 "copy": false, 00:31:22.816 "nvme_iov_md": false 00:31:22.816 }, 00:31:22.816 "driver_specific": { 00:31:22.816 "lvol": { 00:31:22.816 "lvol_store_uuid": "cafb0aff-fe81-49e5-bd34-8bd1482b2331", 00:31:22.816 "base_bdev": "nvme0n1", 00:31:22.816 "thin_provision": true, 00:31:22.816 "num_allocated_clusters": 0, 00:31:22.816 "snapshot": false, 00:31:22.816 "clone": false, 00:31:22.816 "esnap_clone": false 00:31:22.816 } 00:31:22.816 } 00:31:22.816 } 00:31:22.816 ]' 00:31:22.816 08:03:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:22.816 08:03:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:31:22.816 08:03:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:23.074 08:03:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:23.074 08:03:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:23.074 08:03:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:31:23.074 08:03:12 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:31:23.074 08:03:12 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:31:23.074 08:03:12 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:31:23.333 08:03:13 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:31:23.333 08:03:13 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:31:23.333 08:03:13 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size e9d21571-2253-43d7-a6d5-b52e192be980 00:31:23.333 08:03:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=e9d21571-2253-43d7-a6d5-b52e192be980 00:31:23.333 08:03:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:23.333 08:03:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:31:23.333 08:03:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:31:23.333 08:03:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e9d21571-2253-43d7-a6d5-b52e192be980 00:31:23.592 08:03:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:23.592 { 00:31:23.592 "name": "e9d21571-2253-43d7-a6d5-b52e192be980", 00:31:23.592 "aliases": [ 00:31:23.592 "lvs/nvme0n1p0" 00:31:23.592 ], 00:31:23.592 "product_name": "Logical Volume", 00:31:23.592 "block_size": 4096, 00:31:23.592 "num_blocks": 26476544, 00:31:23.592 "uuid": "e9d21571-2253-43d7-a6d5-b52e192be980", 00:31:23.592 "assigned_rate_limits": { 00:31:23.592 "rw_ios_per_sec": 0, 00:31:23.592 "rw_mbytes_per_sec": 0, 00:31:23.592 "r_mbytes_per_sec": 0, 00:31:23.592 "w_mbytes_per_sec": 0 00:31:23.592 }, 00:31:23.592 "claimed": false, 00:31:23.592 "zoned": false, 00:31:23.592 "supported_io_types": { 00:31:23.592 "read": true, 00:31:23.592 "write": true, 00:31:23.592 "unmap": true, 00:31:23.592 "flush": false, 00:31:23.592 "reset": true, 00:31:23.592 "nvme_admin": false, 00:31:23.592 "nvme_io": false, 00:31:23.592 "nvme_io_md": false, 00:31:23.592 "write_zeroes": true, 00:31:23.592 "zcopy": false, 00:31:23.592 "get_zone_info": false, 00:31:23.592 
"zone_management": false, 00:31:23.592 "zone_append": false, 00:31:23.592 "compare": false, 00:31:23.592 "compare_and_write": false, 00:31:23.592 "abort": false, 00:31:23.592 "seek_hole": true, 00:31:23.592 "seek_data": true, 00:31:23.592 "copy": false, 00:31:23.592 "nvme_iov_md": false 00:31:23.592 }, 00:31:23.592 "driver_specific": { 00:31:23.592 "lvol": { 00:31:23.592 "lvol_store_uuid": "cafb0aff-fe81-49e5-bd34-8bd1482b2331", 00:31:23.592 "base_bdev": "nvme0n1", 00:31:23.592 "thin_provision": true, 00:31:23.592 "num_allocated_clusters": 0, 00:31:23.592 "snapshot": false, 00:31:23.593 "clone": false, 00:31:23.593 "esnap_clone": false 00:31:23.593 } 00:31:23.593 } 00:31:23.593 } 00:31:23.593 ]' 00:31:23.593 08:03:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:23.593 08:03:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:31:23.593 08:03:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:23.593 08:03:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:23.593 08:03:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:23.593 08:03:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:31:23.593 08:03:13 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:31:23.593 08:03:13 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:31:23.851 08:03:13 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:31:23.851 08:03:13 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size e9d21571-2253-43d7-a6d5-b52e192be980 00:31:23.851 08:03:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=e9d21571-2253-43d7-a6d5-b52e192be980 00:31:23.851 08:03:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:23.852 08:03:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:31:23.852 08:03:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:31:23.852 08:03:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e9d21571-2253-43d7-a6d5-b52e192be980 00:31:23.852 08:03:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:23.852 { 00:31:23.852 "name": "e9d21571-2253-43d7-a6d5-b52e192be980", 00:31:23.852 "aliases": [ 00:31:23.852 "lvs/nvme0n1p0" 00:31:23.852 ], 00:31:23.852 "product_name": "Logical Volume", 00:31:23.852 "block_size": 4096, 00:31:23.852 "num_blocks": 26476544, 00:31:23.852 "uuid": "e9d21571-2253-43d7-a6d5-b52e192be980", 00:31:23.852 "assigned_rate_limits": { 00:31:23.852 "rw_ios_per_sec": 0, 00:31:23.852 "rw_mbytes_per_sec": 0, 00:31:23.852 "r_mbytes_per_sec": 0, 00:31:23.852 "w_mbytes_per_sec": 0 00:31:23.852 }, 00:31:23.852 "claimed": false, 00:31:23.852 "zoned": false, 00:31:23.852 "supported_io_types": { 00:31:23.852 "read": true, 00:31:23.852 "write": true, 00:31:23.852 "unmap": true, 00:31:23.852 "flush": false, 00:31:23.852 "reset": true, 00:31:23.852 "nvme_admin": false, 00:31:23.852 "nvme_io": false, 00:31:23.852 "nvme_io_md": false, 00:31:23.852 "write_zeroes": true, 00:31:23.852 "zcopy": false, 00:31:23.852 "get_zone_info": false, 00:31:23.852 "zone_management": false, 00:31:23.852 "zone_append": false, 00:31:23.852 "compare": false, 00:31:23.852 "compare_and_write": false, 00:31:23.852 "abort": false, 
00:31:23.852 "seek_hole": true, 00:31:23.852 "seek_data": true, 00:31:23.852 "copy": false, 00:31:23.852 "nvme_iov_md": false 00:31:23.852 }, 00:31:23.852 "driver_specific": { 00:31:23.852 "lvol": { 00:31:23.852 "lvol_store_uuid": "cafb0aff-fe81-49e5-bd34-8bd1482b2331", 00:31:23.852 "base_bdev": "nvme0n1", 00:31:23.852 "thin_provision": true, 00:31:23.852 "num_allocated_clusters": 0, 00:31:23.852 "snapshot": false, 00:31:23.852 "clone": false, 00:31:23.852 "esnap_clone": false 00:31:23.852 } 00:31:23.852 } 00:31:23.852 } 00:31:23.852 ]' 00:31:23.852 08:03:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:23.852 08:03:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:31:23.852 08:03:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:24.112 08:03:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:24.112 08:03:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:24.112 08:03:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:31:24.112 08:03:13 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:31:24.112 08:03:13 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d e9d21571-2253-43d7-a6d5-b52e192be980 --l2p_dram_limit 10' 00:31:24.112 08:03:13 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:31:24.112 08:03:13 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:31:24.112 08:03:13 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:31:24.112 08:03:13 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:31:24.112 08:03:13 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:31:24.112 08:03:13 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d e9d21571-2253-43d7-a6d5-b52e192be980 --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:31:24.112 [2024-11-29 08:03:14.002550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.112 [2024-11-29 08:03:14.002590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:24.112 [2024-11-29 08:03:14.002602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:24.112 [2024-11-29 08:03:14.002609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.112 [2024-11-29 08:03:14.002658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.112 [2024-11-29 08:03:14.002667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:24.112 [2024-11-29 08:03:14.002675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:31:24.112 [2024-11-29 08:03:14.002681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.112 [2024-11-29 08:03:14.002697] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:24.112 [2024-11-29 08:03:14.003244] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:24.112 [2024-11-29 08:03:14.003260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.112 [2024-11-29 08:03:14.003266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:24.112 [2024-11-29 08:03:14.003274] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.565 ms 00:31:24.112 [2024-11-29 08:03:14.003280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.112 [2024-11-29 08:03:14.003364] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 81b9f93a-4e3e-4f17-ae2b-60de48297168 00:31:24.112 [2024-11-29 08:03:14.004313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.112 [2024-11-29 08:03:14.004347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:31:24.112 [2024-11-29 08:03:14.004355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:31:24.112 [2024-11-29 08:03:14.004363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.112 [2024-11-29 08:03:14.009158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.112 [2024-11-29 08:03:14.009193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:24.112 [2024-11-29 08:03:14.009201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.763 ms 00:31:24.112 [2024-11-29 08:03:14.009209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.112 [2024-11-29 08:03:14.009278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.112 [2024-11-29 08:03:14.009287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:24.112 [2024-11-29 08:03:14.009294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:31:24.112 [2024-11-29 08:03:14.009304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.112 [2024-11-29 08:03:14.009338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.112 [2024-11-29 08:03:14.009348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:24.112 [2024-11-29 08:03:14.009355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:24.112 [2024-11-29 08:03:14.009377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.112 [2024-11-29 08:03:14.009395] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:24.112 [2024-11-29 08:03:14.012261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.112 [2024-11-29 08:03:14.012286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:24.112 [2024-11-29 08:03:14.012296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.869 ms 00:31:24.112 [2024-11-29 08:03:14.012301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.112 [2024-11-29 08:03:14.012330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.112 [2024-11-29 08:03:14.012336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:24.112 [2024-11-29 08:03:14.012344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:31:24.112 [2024-11-29 08:03:14.012349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.112 [2024-11-29 08:03:14.012373] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:31:24.112 [2024-11-29 08:03:14.012496] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:24.112 [2024-11-29 08:03:14.012509] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:24.112 [2024-11-29 08:03:14.012518] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:24.112 [2024-11-29 08:03:14.012527] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:24.112 [2024-11-29 08:03:14.012534] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:24.112 [2024-11-29 08:03:14.012541] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:24.112 [2024-11-29 08:03:14.012548] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:24.112 [2024-11-29 08:03:14.012555] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:24.112 [2024-11-29 08:03:14.012560] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:24.112 [2024-11-29 08:03:14.012568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.112 [2024-11-29 08:03:14.012578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:24.112 [2024-11-29 08:03:14.012586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:31:24.112 [2024-11-29 08:03:14.012592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.112 [2024-11-29 08:03:14.012656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.112 [2024-11-29 08:03:14.012663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:24.112 [2024-11-29 08:03:14.012670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:31:24.112 [2024-11-29 08:03:14.012676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.112 [2024-11-29 08:03:14.012754] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:24.112 [2024-11-29 08:03:14.012762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:24.112 [2024-11-29 08:03:14.012769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:24.112 [2024-11-29 08:03:14.012775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:24.112 [2024-11-29 08:03:14.012782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:24.112 [2024-11-29 08:03:14.012787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:24.112 [2024-11-29 08:03:14.012794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:24.112 [2024-11-29 08:03:14.012798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:24.112 [2024-11-29 08:03:14.012805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:24.112 [2024-11-29 08:03:14.012810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:24.112 [2024-11-29 08:03:14.012816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:24.112 [2024-11-29 08:03:14.012821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:24.112 [2024-11-29 08:03:14.012827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:24.113 [2024-11-29 08:03:14.012832] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:24.113 [2024-11-29 08:03:14.012838] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:24.113 [2024-11-29 08:03:14.012843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:24.113 [2024-11-29 08:03:14.012852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:24.113 [2024-11-29 08:03:14.012857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:24.113 [2024-11-29 08:03:14.012863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:24.113 [2024-11-29 08:03:14.012868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:24.113 [2024-11-29 08:03:14.012874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:24.113 [2024-11-29 08:03:14.012879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:24.113 [2024-11-29 08:03:14.012887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:24.113 [2024-11-29 08:03:14.012893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:24.113 [2024-11-29 08:03:14.012899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:24.113 [2024-11-29 08:03:14.012904] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:24.113 [2024-11-29 08:03:14.012910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:24.113 [2024-11-29 08:03:14.012915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:24.113 [2024-11-29 08:03:14.012921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:24.113 [2024-11-29 08:03:14.012926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:24.113 [2024-11-29 08:03:14.012932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:24.113 [2024-11-29 08:03:14.012937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:24.113 [2024-11-29 08:03:14.012945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:24.113 [2024-11-29 08:03:14.012950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:24.113 [2024-11-29 08:03:14.012956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:24.113 [2024-11-29 08:03:14.012962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:24.113 [2024-11-29 08:03:14.012968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:24.113 [2024-11-29 08:03:14.012973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:24.113 [2024-11-29 08:03:14.012979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:24.113 [2024-11-29 08:03:14.012984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:24.113 [2024-11-29 08:03:14.012990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:24.113 [2024-11-29 08:03:14.012995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:24.113 [2024-11-29 08:03:14.013001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:24.113 [2024-11-29 08:03:14.013005] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:24.113 [2024-11-29 08:03:14.013014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:24.113 [2024-11-29 08:03:14.013019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:31:24.113 [2024-11-29 08:03:14.013025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:24.113 [2024-11-29 08:03:14.013031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:24.113 [2024-11-29 08:03:14.013039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:24.113 [2024-11-29 08:03:14.013044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:24.113 [2024-11-29 08:03:14.013050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:24.113 [2024-11-29 08:03:14.013054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:24.113 [2024-11-29 08:03:14.013060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:24.113 [2024-11-29 08:03:14.013068] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:24.113 [2024-11-29 08:03:14.013078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:24.113 [2024-11-29 08:03:14.013085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:24.113 [2024-11-29 08:03:14.013092] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:24.113 [2024-11-29 08:03:14.013097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:24.113 [2024-11-29 08:03:14.013104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:24.113 [2024-11-29 08:03:14.013109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:24.113 [2024-11-29 08:03:14.013116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:24.113 [2024-11-29 08:03:14.013121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:24.113 [2024-11-29 08:03:14.013128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:24.113 [2024-11-29 08:03:14.013133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:24.113 [2024-11-29 08:03:14.013141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:24.113 [2024-11-29 08:03:14.013146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:24.113 [2024-11-29 08:03:14.013155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:24.113 [2024-11-29 08:03:14.013160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:24.113 [2024-11-29 08:03:14.013167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
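The MiB figures in the layout dump above line up with the superblock region table, since each blk_offs/blk_sz value appears to count 4096-byte blocks (the block_size this bdev reported earlier). A minimal sanity check in plain shell arithmetic, assuming that 4 KiB block size throughout:

  # Region type:0x2 (L2P): blk_offs:0x20 blk_sz:0x5000, counted in 4 KiB blocks
  echo $(( 0x5000 * 4096 / 1024 / 1024 ))     # 80     -> "Region l2p ... blocks: 80.00 MiB"
  echo $(( 0x20 * 4096 ))                     # 131072 bytes = 0.125 MiB -> "offset: 0.12 MiB"
  # The same arithmetic reproduces bdev_size=103424 from bs=4096 and nb=26476544 above
  echo $(( 4096 * 26476544 / 1024 / 1024 ))   # 103424 (MiB)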
00:31:24.113 [2024-11-29 08:03:14.013172] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:24.113 [2024-11-29 08:03:14.013179] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:24.113 [2024-11-29 08:03:14.013185] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:24.113 [2024-11-29 08:03:14.013192] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:24.113 [2024-11-29 08:03:14.013197] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:24.113 [2024-11-29 08:03:14.013204] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:24.113 [2024-11-29 08:03:14.013209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.113 [2024-11-29 08:03:14.013216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:24.113 [2024-11-29 08:03:14.013221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.509 ms 00:31:24.113 [2024-11-29 08:03:14.013228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.113 [2024-11-29 08:03:14.013269] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:31:24.113 [2024-11-29 08:03:14.013280] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:31:28.323 [2024-11-29 08:03:17.554051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.323 [2024-11-29 08:03:17.554143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:31:28.323 [2024-11-29 08:03:17.554161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3540.766 ms 00:31:28.323 [2024-11-29 08:03:17.554173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.323 [2024-11-29 08:03:17.586491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.323 [2024-11-29 08:03:17.586560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:28.323 [2024-11-29 08:03:17.586575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.065 ms 00:31:28.323 [2024-11-29 08:03:17.586589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.323 [2024-11-29 08:03:17.586735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.323 [2024-11-29 08:03:17.586750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:28.323 [2024-11-29 08:03:17.586760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:31:28.323 [2024-11-29 08:03:17.586776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.323 [2024-11-29 08:03:17.622675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.323 [2024-11-29 08:03:17.622726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:28.323 [2024-11-29 08:03:17.622738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.862 ms 00:31:28.323 [2024-11-29 08:03:17.622748] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.323 [2024-11-29 08:03:17.622788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.323 [2024-11-29 08:03:17.622800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:28.323 [2024-11-29 08:03:17.622809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:31:28.323 [2024-11-29 08:03:17.622828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.323 [2024-11-29 08:03:17.623472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.323 [2024-11-29 08:03:17.623505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:28.323 [2024-11-29 08:03:17.623516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.570 ms 00:31:28.323 [2024-11-29 08:03:17.623527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.323 [2024-11-29 08:03:17.623645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.323 [2024-11-29 08:03:17.623661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:28.323 [2024-11-29 08:03:17.623670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:31:28.323 [2024-11-29 08:03:17.623683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.323 [2024-11-29 08:03:17.641469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.323 [2024-11-29 08:03:17.641519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:28.323 [2024-11-29 08:03:17.641531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.767 ms 00:31:28.323 [2024-11-29 08:03:17.641542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.323 [2024-11-29 08:03:17.672291] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:28.323 [2024-11-29 08:03:17.676196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.323 [2024-11-29 08:03:17.676246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:28.323 [2024-11-29 08:03:17.676261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.560 ms 00:31:28.323 [2024-11-29 08:03:17.676269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.323 [2024-11-29 08:03:17.769762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.323 [2024-11-29 08:03:17.769826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:31:28.323 [2024-11-29 08:03:17.769846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.441 ms 00:31:28.323 [2024-11-29 08:03:17.769856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.323 [2024-11-29 08:03:17.770076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.323 [2024-11-29 08:03:17.770088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:28.323 [2024-11-29 08:03:17.770104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.162 ms 00:31:28.323 [2024-11-29 08:03:17.770112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.323 [2024-11-29 08:03:17.797000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.323 [2024-11-29 08:03:17.797056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save 
initial band info metadata 00:31:28.323 [2024-11-29 08:03:17.797074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.825 ms 00:31:28.323 [2024-11-29 08:03:17.797082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.323 [2024-11-29 08:03:17.823092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.323 [2024-11-29 08:03:17.823142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:31:28.323 [2024-11-29 08:03:17.823159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.943 ms 00:31:28.323 [2024-11-29 08:03:17.823166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.323 [2024-11-29 08:03:17.823821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.323 [2024-11-29 08:03:17.823835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:28.323 [2024-11-29 08:03:17.823850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.602 ms 00:31:28.323 [2024-11-29 08:03:17.823859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.323 [2024-11-29 08:03:17.906058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.323 [2024-11-29 08:03:17.906111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:31:28.323 [2024-11-29 08:03:17.906131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.115 ms 00:31:28.323 [2024-11-29 08:03:17.906141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.323 [2024-11-29 08:03:17.934397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.323 [2024-11-29 08:03:17.934460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:31:28.323 [2024-11-29 08:03:17.934477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.147 ms 00:31:28.323 [2024-11-29 08:03:17.934486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.323 [2024-11-29 08:03:17.961272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.323 [2024-11-29 08:03:17.961327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:31:28.323 [2024-11-29 08:03:17.961344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.725 ms 00:31:28.323 [2024-11-29 08:03:17.961352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.323 [2024-11-29 08:03:17.988628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.323 [2024-11-29 08:03:17.988677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:28.323 [2024-11-29 08:03:17.988692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.207 ms 00:31:28.323 [2024-11-29 08:03:17.988700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.323 [2024-11-29 08:03:17.988761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.323 [2024-11-29 08:03:17.988770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:28.323 [2024-11-29 08:03:17.988785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:28.323 [2024-11-29 08:03:17.988793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.323 [2024-11-29 08:03:17.988906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.323 [2024-11-29 
08:03:17.988920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:28.323 [2024-11-29 08:03:17.988932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:31:28.323 [2024-11-29 08:03:17.988940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.323 [2024-11-29 08:03:17.990166] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3987.099 ms, result 0 00:31:28.323 { 00:31:28.323 "name": "ftl0", 00:31:28.323 "uuid": "81b9f93a-4e3e-4f17-ae2b-60de48297168" 00:31:28.323 } 00:31:28.323 08:03:18 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:31:28.323 08:03:18 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:31:28.323 08:03:18 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:31:28.323 08:03:18 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:31:28.585 [2024-11-29 08:03:18.437610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.585 [2024-11-29 08:03:18.437697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:28.585 [2024-11-29 08:03:18.437715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:28.585 [2024-11-29 08:03:18.437728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.585 [2024-11-29 08:03:18.437760] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:28.585 [2024-11-29 08:03:18.441082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.585 [2024-11-29 08:03:18.441127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:28.585 [2024-11-29 08:03:18.441144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.294 ms 00:31:28.585 [2024-11-29 08:03:18.441153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.585 [2024-11-29 08:03:18.441527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.585 [2024-11-29 08:03:18.441545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:28.585 [2024-11-29 08:03:18.441559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:31:28.585 [2024-11-29 08:03:18.441568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.585 [2024-11-29 08:03:18.444844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.585 [2024-11-29 08:03:18.444891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:28.585 [2024-11-29 08:03:18.444904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.256 ms 00:31:28.585 [2024-11-29 08:03:18.444913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.585 [2024-11-29 08:03:18.451197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.585 [2024-11-29 08:03:18.451243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:28.585 [2024-11-29 08:03:18.451258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.257 ms 00:31:28.585 [2024-11-29 08:03:18.451267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.585 [2024-11-29 08:03:18.479369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:31:28.585 [2024-11-29 08:03:18.479421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:28.585 [2024-11-29 08:03:18.479438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.019 ms 00:31:28.585 [2024-11-29 08:03:18.479456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.585 [2024-11-29 08:03:18.498342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.585 [2024-11-29 08:03:18.498395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:28.585 [2024-11-29 08:03:18.498412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.819 ms 00:31:28.585 [2024-11-29 08:03:18.498421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.585 [2024-11-29 08:03:18.498618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.586 [2024-11-29 08:03:18.498634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:28.586 [2024-11-29 08:03:18.498647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:31:28.586 [2024-11-29 08:03:18.498657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.586 [2024-11-29 08:03:18.525525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.586 [2024-11-29 08:03:18.525577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:28.586 [2024-11-29 08:03:18.525593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.841 ms 00:31:28.586 [2024-11-29 08:03:18.525601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.848 [2024-11-29 08:03:18.551616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.848 [2024-11-29 08:03:18.551668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:31:28.848 [2024-11-29 08:03:18.551684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.952 ms 00:31:28.848 [2024-11-29 08:03:18.551692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.848 [2024-11-29 08:03:18.577261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.848 [2024-11-29 08:03:18.577312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:28.848 [2024-11-29 08:03:18.577329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.502 ms 00:31:28.848 [2024-11-29 08:03:18.577337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.848 [2024-11-29 08:03:18.603212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.848 [2024-11-29 08:03:18.603264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:28.848 [2024-11-29 08:03:18.603280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.720 ms 00:31:28.848 [2024-11-29 08:03:18.603289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.848 [2024-11-29 08:03:18.603345] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:28.848 [2024-11-29 08:03:18.603362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:31:28.848 [2024-11-29 08:03:18.603380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:28.848 [2024-11-29 08:03:18.603388] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:28.848 [2024-11-29 08:03:18.603399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:28.848 [2024-11-29 08:03:18.603407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:28.848 [2024-11-29 08:03:18.603419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:28.848 [2024-11-29 08:03:18.603428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:28.848 [2024-11-29 08:03:18.603465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:28.848 [2024-11-29 08:03:18.603475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:28.848 [2024-11-29 08:03:18.603488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:28.848 [2024-11-29 08:03:18.603497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.603508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.603517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.603528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.603537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.603549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.603557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.603567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.603575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.603588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.603596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.603607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.603615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.603628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.603635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.603645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.603652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.603662] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.603677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.603687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.603696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.603706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.603714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.603723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.603731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.603741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.603748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.603758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.603765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.603778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.603786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.603796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.603804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.603816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.603824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.603835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.603843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.603854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.603863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.603873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.603880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.603890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 
[2024-11-29 08:03:18.603897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.603907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.603915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.603936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.603943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.603953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.603960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.603969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.603981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.603991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.604001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.604011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.604019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.604028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.604036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.604045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.604053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.604063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.604070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.604085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.604094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.604104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.604111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.604120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:28.849 [2024-11-29 08:03:18.604127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 
state: free 00:31:28.849 [2024-11-29 08:03:18.604137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:28.850 [2024-11-29 08:03:18.604145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:28.850 [2024-11-29 08:03:18.604164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:28.850 [2024-11-29 08:03:18.604172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:28.850 [2024-11-29 08:03:18.604181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:28.850 [2024-11-29 08:03:18.604189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:28.850 [2024-11-29 08:03:18.604198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:28.850 [2024-11-29 08:03:18.604206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:28.850 [2024-11-29 08:03:18.604216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:28.850 [2024-11-29 08:03:18.604224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:28.850 [2024-11-29 08:03:18.604238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:28.850 [2024-11-29 08:03:18.604245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:28.850 [2024-11-29 08:03:18.604255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:28.850 [2024-11-29 08:03:18.604262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:28.850 [2024-11-29 08:03:18.604272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:28.850 [2024-11-29 08:03:18.604283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:28.850 [2024-11-29 08:03:18.604294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:28.850 [2024-11-29 08:03:18.604305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:28.850 [2024-11-29 08:03:18.604314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:28.850 [2024-11-29 08:03:18.604322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:28.850 [2024-11-29 08:03:18.604334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:28.850 [2024-11-29 08:03:18.604342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:28.850 [2024-11-29 08:03:18.604352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:28.850 [2024-11-29 08:03:18.604369] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:28.850 [2024-11-29 08:03:18.604380] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 81b9f93a-4e3e-4f17-ae2b-60de48297168 
00:31:28.850 [2024-11-29 08:03:18.604388] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:31:28.850 [2024-11-29 08:03:18.604401] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:31:28.850 [2024-11-29 08:03:18.604414] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:28.850 [2024-11-29 08:03:18.604424] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:28.850 [2024-11-29 08:03:18.604432] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:28.850 [2024-11-29 08:03:18.604456] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:28.850 [2024-11-29 08:03:18.604465] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:28.850 [2024-11-29 08:03:18.604474] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:28.850 [2024-11-29 08:03:18.604481] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:28.850 [2024-11-29 08:03:18.604492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.850 [2024-11-29 08:03:18.604499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:28.850 [2024-11-29 08:03:18.604510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.148 ms 00:31:28.850 [2024-11-29 08:03:18.604521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.850 [2024-11-29 08:03:18.619382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.850 [2024-11-29 08:03:18.619430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:28.850 [2024-11-29 08:03:18.619455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.796 ms 00:31:28.850 [2024-11-29 08:03:18.619465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.850 [2024-11-29 08:03:18.619900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.850 [2024-11-29 08:03:18.619921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:28.850 [2024-11-29 08:03:18.619937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.401 ms 00:31:28.850 [2024-11-29 08:03:18.619944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.850 [2024-11-29 08:03:18.670683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:28.850 [2024-11-29 08:03:18.670734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:28.850 [2024-11-29 08:03:18.670749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:28.850 [2024-11-29 08:03:18.670758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.850 [2024-11-29 08:03:18.670826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:28.850 [2024-11-29 08:03:18.670836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:28.850 [2024-11-29 08:03:18.670852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:28.850 [2024-11-29 08:03:18.670860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.850 [2024-11-29 08:03:18.670967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:28.850 [2024-11-29 08:03:18.670989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:28.850 [2024-11-29 08:03:18.671001] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:28.850 [2024-11-29 08:03:18.671011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.850 [2024-11-29 08:03:18.671036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:28.850 [2024-11-29 08:03:18.671045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:28.850 [2024-11-29 08:03:18.671056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:28.850 [2024-11-29 08:03:18.671068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.850 [2024-11-29 08:03:18.763308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:28.850 [2024-11-29 08:03:18.763371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:28.850 [2024-11-29 08:03:18.763389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:28.850 [2024-11-29 08:03:18.763398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.112 [2024-11-29 08:03:18.838951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:29.112 [2024-11-29 08:03:18.839015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:29.112 [2024-11-29 08:03:18.839036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:29.112 [2024-11-29 08:03:18.839045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.112 [2024-11-29 08:03:18.839165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:29.112 [2024-11-29 08:03:18.839177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:29.112 [2024-11-29 08:03:18.839189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:29.112 [2024-11-29 08:03:18.839198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.112 [2024-11-29 08:03:18.839279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:29.112 [2024-11-29 08:03:18.839293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:29.112 [2024-11-29 08:03:18.839305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:29.112 [2024-11-29 08:03:18.839314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.112 [2024-11-29 08:03:18.839439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:29.112 [2024-11-29 08:03:18.839483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:29.112 [2024-11-29 08:03:18.839496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:29.112 [2024-11-29 08:03:18.839504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.112 [2024-11-29 08:03:18.839551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:29.112 [2024-11-29 08:03:18.839561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:29.112 [2024-11-29 08:03:18.839572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:29.112 [2024-11-29 08:03:18.839581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.112 [2024-11-29 08:03:18.839640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:29.112 [2024-11-29 08:03:18.839652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:31:29.112 [2024-11-29 08:03:18.839664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:29.112 [2024-11-29 08:03:18.839674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.112 [2024-11-29 08:03:18.839739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:29.112 [2024-11-29 08:03:18.839751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:29.112 [2024-11-29 08:03:18.839763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:29.112 [2024-11-29 08:03:18.839771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.112 [2024-11-29 08:03:18.839954] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 402.300 ms, result 0 00:31:29.112 true 00:31:29.112 08:03:18 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 83962 00:31:29.112 08:03:18 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 83962 ']' 00:31:29.112 08:03:18 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 83962 00:31:29.112 08:03:18 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # uname 00:31:29.112 08:03:18 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:29.112 08:03:18 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83962 00:31:29.112 08:03:18 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:29.112 killing process with pid 83962 00:31:29.112 08:03:18 ftl.ftl_restore_fast -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:29.112 08:03:18 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83962' 00:31:29.112 08:03:18 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # kill 83962 00:31:29.112 08:03:18 ftl.ftl_restore_fast -- common/autotest_common.sh@978 -- # wait 83962 00:31:35.747 08:03:24 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:31:39.031 262144+0 records in 00:31:39.031 262144+0 records out 00:31:39.031 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.05457 s, 265 MB/s 00:31:39.031 08:03:28 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:31:40.931 08:03:30 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:40.932 [2024-11-29 08:03:30.440849] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
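The dd step above sizes the test file as 256K records of 4 KiB, i.e. exactly 1 GiB, and the reported 265 MB/s is simply bytes over elapsed seconds (dd prints decimal megabytes). A quick check, plain shell and nothing SPDK-specific:

  echo $(( 262144 * 4096 ))   # 1073741824 bytes, the "1.1 GB, 1.0 GiB" dd printed
  # 1073741824 bytes / 4.05457 s ~= 264.8e6 B/s, which dd rounds to 265 MB/s
  awk 'BEGIN { printf "%.1f MB/s\n", 1073741824 / 4.05457 / 1e6 }'

spdk_dd then replays that file into ftl0 via the --json config, presumably the ftl.json assembled a few steps earlier from the save_subsystem_config output.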
00:31:40.932 [2024-11-29 08:03:30.440945] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84187 ] 00:31:40.932 [2024-11-29 08:03:30.595377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:40.932 [2024-11-29 08:03:30.695296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:41.191 [2024-11-29 08:03:30.989507] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:41.191 [2024-11-29 08:03:30.989584] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:41.456 [2024-11-29 08:03:31.150425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.456 [2024-11-29 08:03:31.150508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:41.456 [2024-11-29 08:03:31.150524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:41.456 [2024-11-29 08:03:31.150533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.456 [2024-11-29 08:03:31.150589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.456 [2024-11-29 08:03:31.150603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:41.456 [2024-11-29 08:03:31.150612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:31:41.456 [2024-11-29 08:03:31.150620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.456 [2024-11-29 08:03:31.150642] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:41.456 [2024-11-29 08:03:31.151341] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:41.456 [2024-11-29 08:03:31.151370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.456 [2024-11-29 08:03:31.151378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:41.456 [2024-11-29 08:03:31.151387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.734 ms 00:31:41.456 [2024-11-29 08:03:31.151395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.456 [2024-11-29 08:03:31.153093] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:31:41.456 [2024-11-29 08:03:31.167538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.456 [2024-11-29 08:03:31.167593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:41.456 [2024-11-29 08:03:31.167607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.447 ms 00:31:41.456 [2024-11-29 08:03:31.167615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.456 [2024-11-29 08:03:31.167701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.456 [2024-11-29 08:03:31.167713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:41.456 [2024-11-29 08:03:31.167723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:31:41.456 [2024-11-29 08:03:31.167730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.456 [2024-11-29 08:03:31.176225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:31:41.456 [2024-11-29 08:03:31.176271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:41.456 [2024-11-29 08:03:31.176283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.415 ms 00:31:41.456 [2024-11-29 08:03:31.176296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.456 [2024-11-29 08:03:31.176382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.456 [2024-11-29 08:03:31.176392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:41.456 [2024-11-29 08:03:31.176402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:31:41.456 [2024-11-29 08:03:31.176410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.456 [2024-11-29 08:03:31.176474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.456 [2024-11-29 08:03:31.176485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:41.456 [2024-11-29 08:03:31.176495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:31:41.456 [2024-11-29 08:03:31.176503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.456 [2024-11-29 08:03:31.176532] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:41.456 [2024-11-29 08:03:31.180813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.456 [2024-11-29 08:03:31.180853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:41.456 [2024-11-29 08:03:31.180867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.288 ms 00:31:41.456 [2024-11-29 08:03:31.180875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.456 [2024-11-29 08:03:31.180909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.456 [2024-11-29 08:03:31.180918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:41.456 [2024-11-29 08:03:31.180928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:31:41.456 [2024-11-29 08:03:31.180935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.456 [2024-11-29 08:03:31.180988] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:41.456 [2024-11-29 08:03:31.181012] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:41.456 [2024-11-29 08:03:31.181050] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:41.456 [2024-11-29 08:03:31.181070] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:41.456 [2024-11-29 08:03:31.181176] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:41.456 [2024-11-29 08:03:31.181187] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:41.456 [2024-11-29 08:03:31.181198] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:41.456 [2024-11-29 08:03:31.181208] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:41.456 [2024-11-29 08:03:31.181218] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:41.456 [2024-11-29 08:03:31.181227] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:41.456 [2024-11-29 08:03:31.181235] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:41.456 [2024-11-29 08:03:31.181246] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:41.456 [2024-11-29 08:03:31.181254] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:41.456 [2024-11-29 08:03:31.181262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.456 [2024-11-29 08:03:31.181270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:41.456 [2024-11-29 08:03:31.181277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:31:41.456 [2024-11-29 08:03:31.181286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.456 [2024-11-29 08:03:31.181371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.456 [2024-11-29 08:03:31.181382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:41.456 [2024-11-29 08:03:31.181390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:31:41.456 [2024-11-29 08:03:31.181412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.456 [2024-11-29 08:03:31.181537] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:41.456 [2024-11-29 08:03:31.181558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:41.456 [2024-11-29 08:03:31.181568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:41.456 [2024-11-29 08:03:31.181576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:41.456 [2024-11-29 08:03:31.181584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:41.456 [2024-11-29 08:03:31.181592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:41.456 [2024-11-29 08:03:31.181600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:41.456 [2024-11-29 08:03:31.181608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:41.456 [2024-11-29 08:03:31.181615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:41.456 [2024-11-29 08:03:31.181622] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:41.456 [2024-11-29 08:03:31.181629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:41.456 [2024-11-29 08:03:31.181636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:41.456 [2024-11-29 08:03:31.181644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:41.456 [2024-11-29 08:03:31.181659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:41.456 [2024-11-29 08:03:31.181667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:41.456 [2024-11-29 08:03:31.181674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:41.456 [2024-11-29 08:03:31.181681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:41.456 [2024-11-29 08:03:31.181688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:41.456 [2024-11-29 08:03:31.181696] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:41.456 [2024-11-29 08:03:31.181704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:41.456 [2024-11-29 08:03:31.181712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:41.456 [2024-11-29 08:03:31.181719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:41.456 [2024-11-29 08:03:31.181726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:41.456 [2024-11-29 08:03:31.181732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:41.456 [2024-11-29 08:03:31.181739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:41.456 [2024-11-29 08:03:31.181746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:41.456 [2024-11-29 08:03:31.181753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:41.456 [2024-11-29 08:03:31.181759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:41.456 [2024-11-29 08:03:31.181766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:41.456 [2024-11-29 08:03:31.181774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:41.456 [2024-11-29 08:03:31.181781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:41.456 [2024-11-29 08:03:31.181788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:41.456 [2024-11-29 08:03:31.181794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:41.456 [2024-11-29 08:03:31.181801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:41.456 [2024-11-29 08:03:31.181808] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:41.457 [2024-11-29 08:03:31.181815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:41.457 [2024-11-29 08:03:31.181822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:41.457 [2024-11-29 08:03:31.181829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:41.457 [2024-11-29 08:03:31.181835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:41.457 [2024-11-29 08:03:31.181842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:41.457 [2024-11-29 08:03:31.181849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:41.457 [2024-11-29 08:03:31.181856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:41.457 [2024-11-29 08:03:31.181862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:41.457 [2024-11-29 08:03:31.181869] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:41.457 [2024-11-29 08:03:31.181878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:41.457 [2024-11-29 08:03:31.181886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:41.457 [2024-11-29 08:03:31.181895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:41.457 [2024-11-29 08:03:31.181903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:41.457 [2024-11-29 08:03:31.181910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:41.457 [2024-11-29 08:03:31.181917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:41.457 
[2024-11-29 08:03:31.181924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:41.457 [2024-11-29 08:03:31.181931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:41.457 [2024-11-29 08:03:31.181937] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:41.457 [2024-11-29 08:03:31.181947] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:41.457 [2024-11-29 08:03:31.181957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:41.457 [2024-11-29 08:03:31.181967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:41.457 [2024-11-29 08:03:31.181975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:41.457 [2024-11-29 08:03:31.181982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:41.457 [2024-11-29 08:03:31.181989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:41.457 [2024-11-29 08:03:31.181997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:41.457 [2024-11-29 08:03:31.182004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:41.457 [2024-11-29 08:03:31.182011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:41.457 [2024-11-29 08:03:31.182018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:41.457 [2024-11-29 08:03:31.182025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:41.457 [2024-11-29 08:03:31.182032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:41.457 [2024-11-29 08:03:31.182039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:41.457 [2024-11-29 08:03:31.182046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:41.457 [2024-11-29 08:03:31.182053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:41.457 [2024-11-29 08:03:31.182060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:41.457 [2024-11-29 08:03:31.182068] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:41.457 [2024-11-29 08:03:31.182076] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:41.457 [2024-11-29 08:03:31.182084] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:31:41.457 [2024-11-29 08:03:31.182091] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:41.457 [2024-11-29 08:03:31.182098] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:41.457 [2024-11-29 08:03:31.182106] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:41.457 [2024-11-29 08:03:31.182114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.457 [2024-11-29 08:03:31.182122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:41.457 [2024-11-29 08:03:31.182131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.645 ms 00:31:41.457 [2024-11-29 08:03:31.182140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.457 [2024-11-29 08:03:31.214673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.457 [2024-11-29 08:03:31.214724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:41.457 [2024-11-29 08:03:31.214736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.488 ms 00:31:41.457 [2024-11-29 08:03:31.214749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.457 [2024-11-29 08:03:31.214841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.457 [2024-11-29 08:03:31.214851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:41.457 [2024-11-29 08:03:31.214860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:31:41.457 [2024-11-29 08:03:31.214869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.457 [2024-11-29 08:03:31.257934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.457 [2024-11-29 08:03:31.257990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:41.457 [2024-11-29 08:03:31.258004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.998 ms 00:31:41.457 [2024-11-29 08:03:31.258013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.457 [2024-11-29 08:03:31.258066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.457 [2024-11-29 08:03:31.258076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:41.457 [2024-11-29 08:03:31.258090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:41.457 [2024-11-29 08:03:31.258098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.457 [2024-11-29 08:03:31.258748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.457 [2024-11-29 08:03:31.258785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:41.457 [2024-11-29 08:03:31.258795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.571 ms 00:31:41.457 [2024-11-29 08:03:31.258803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.457 [2024-11-29 08:03:31.258962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.457 [2024-11-29 08:03:31.258973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:41.457 [2024-11-29 08:03:31.258989] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:31:41.457 [2024-11-29 08:03:31.258998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.457 [2024-11-29 08:03:31.274936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.457 [2024-11-29 08:03:31.274984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:41.457 [2024-11-29 08:03:31.274995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.918 ms 00:31:41.457 [2024-11-29 08:03:31.275003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.457 [2024-11-29 08:03:31.289352] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:31:41.457 [2024-11-29 08:03:31.289413] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:41.457 [2024-11-29 08:03:31.289429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.457 [2024-11-29 08:03:31.289438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:41.457 [2024-11-29 08:03:31.289458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.314 ms 00:31:41.457 [2024-11-29 08:03:31.289465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.457 [2024-11-29 08:03:31.315609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.457 [2024-11-29 08:03:31.315665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:41.457 [2024-11-29 08:03:31.315677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.088 ms 00:31:41.457 [2024-11-29 08:03:31.315686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.457 [2024-11-29 08:03:31.328664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.457 [2024-11-29 08:03:31.328709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:41.457 [2024-11-29 08:03:31.328721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.922 ms 00:31:41.457 [2024-11-29 08:03:31.328729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.457 [2024-11-29 08:03:31.341299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.457 [2024-11-29 08:03:31.341346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:41.457 [2024-11-29 08:03:31.341358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.522 ms 00:31:41.457 [2024-11-29 08:03:31.341365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.457 [2024-11-29 08:03:31.342062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.457 [2024-11-29 08:03:31.342098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:41.457 [2024-11-29 08:03:31.342109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.556 ms 00:31:41.457 [2024-11-29 08:03:31.342121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.717 [2024-11-29 08:03:31.407303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.717 [2024-11-29 08:03:31.407364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:41.717 [2024-11-29 08:03:31.407381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 65.161 ms 00:31:41.717 [2024-11-29 08:03:31.407397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.717 [2024-11-29 08:03:31.418547] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:41.717 [2024-11-29 08:03:31.421813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.717 [2024-11-29 08:03:31.421855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:41.717 [2024-11-29 08:03:31.421867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.336 ms 00:31:41.717 [2024-11-29 08:03:31.421876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.717 [2024-11-29 08:03:31.421971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.717 [2024-11-29 08:03:31.421982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:41.717 [2024-11-29 08:03:31.421992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:31:41.717 [2024-11-29 08:03:31.422001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.717 [2024-11-29 08:03:31.422077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.717 [2024-11-29 08:03:31.422090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:41.717 [2024-11-29 08:03:31.422098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:31:41.717 [2024-11-29 08:03:31.422106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.717 [2024-11-29 08:03:31.422127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.717 [2024-11-29 08:03:31.422136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:41.717 [2024-11-29 08:03:31.422145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:41.717 [2024-11-29 08:03:31.422153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.717 [2024-11-29 08:03:31.422189] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:41.717 [2024-11-29 08:03:31.422202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.717 [2024-11-29 08:03:31.422210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:41.717 [2024-11-29 08:03:31.422219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:31:41.717 [2024-11-29 08:03:31.422227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.717 [2024-11-29 08:03:31.448189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.717 [2024-11-29 08:03:31.448239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:41.717 [2024-11-29 08:03:31.448253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.942 ms 00:31:41.717 [2024-11-29 08:03:31.448268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.717 [2024-11-29 08:03:31.448361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.717 [2024-11-29 08:03:31.448372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:41.717 [2024-11-29 08:03:31.448381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:31:41.717 [2024-11-29 08:03:31.448390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
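The layout dump above is internally consistent, and the key region sizes can be re-derived by hand. The base-device data region (type 0x9) spans 0x1900000 = 26214400 blocks backing 102400.00 MiB, which pins the FTL block size at 4096 B; the L2P and P2L region sizes then follow from the logged entry counts. A sketch of the arithmetic in shell:

  echo $(( 102400 * 1024 * 1024 / 0x1900000 ))   # 4096 B per FTL block
  echo $(( 20971520 * 4 / 1024 / 1024 ))         # L2P: 20971520 entries x 4 B = 80 MiB ("Region l2p ... 80.00 MiB")
  echo $(( 2048 * 4096 / 1024 / 1024 ))          # P2L: 2048 pages x 4096 B = 8 MiB ("Region p2l0..p2l3 ... 8.00 MiB")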
00:31:41.717 [2024-11-29 08:03:31.450499] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 299.550 ms, result 0 00:31:42.658  [2024-11-29T08:03:33.546Z] Copying: 19/1024 [MB] (19 MBps) [2024-11-29T08:03:34.489Z] Copying: 32/1024 [MB] (12 MBps) [2024-11-29T08:03:35.866Z] Copying: 50/1024 [MB] (17 MBps) [2024-11-29T08:03:36.517Z] Copying: 98/1024 [MB] (48 MBps) [2024-11-29T08:03:37.904Z] Copying: 115/1024 [MB] (16 MBps) [2024-11-29T08:03:38.475Z] Copying: 131/1024 [MB] (15 MBps) [2024-11-29T08:03:39.860Z] Copying: 148/1024 [MB] (17 MBps) [2024-11-29T08:03:40.800Z] Copying: 169/1024 [MB] (20 MBps) [2024-11-29T08:03:41.742Z] Copying: 191/1024 [MB] (22 MBps) [2024-11-29T08:03:42.685Z] Copying: 212/1024 [MB] (20 MBps) [2024-11-29T08:03:43.630Z] Copying: 231/1024 [MB] (18 MBps) [2024-11-29T08:03:44.573Z] Copying: 245/1024 [MB] (14 MBps) [2024-11-29T08:03:45.519Z] Copying: 260/1024 [MB] (14 MBps) [2024-11-29T08:03:46.904Z] Copying: 277/1024 [MB] (16 MBps) [2024-11-29T08:03:47.478Z] Copying: 294/1024 [MB] (17 MBps) [2024-11-29T08:03:48.869Z] Copying: 306/1024 [MB] (12 MBps) [2024-11-29T08:03:49.814Z] Copying: 322/1024 [MB] (15 MBps) [2024-11-29T08:03:50.756Z] Copying: 338/1024 [MB] (15 MBps) [2024-11-29T08:03:51.703Z] Copying: 354/1024 [MB] (16 MBps) [2024-11-29T08:03:52.674Z] Copying: 367/1024 [MB] (12 MBps) [2024-11-29T08:03:53.619Z] Copying: 378/1024 [MB] (11 MBps) [2024-11-29T08:03:54.563Z] Copying: 392/1024 [MB] (14 MBps) [2024-11-29T08:03:55.510Z] Copying: 403/1024 [MB] (10 MBps) [2024-11-29T08:03:56.888Z] Copying: 413/1024 [MB] (10 MBps) [2024-11-29T08:03:57.831Z] Copying: 441/1024 [MB] (28 MBps) [2024-11-29T08:03:58.777Z] Copying: 451/1024 [MB] (10 MBps) [2024-11-29T08:03:59.720Z] Copying: 472776/1048576 [kB] (10180 kBps) [2024-11-29T08:04:00.665Z] Copying: 472/1024 [MB] (10 MBps) [2024-11-29T08:04:01.609Z] Copying: 490/1024 [MB] (18 MBps) [2024-11-29T08:04:02.556Z] Copying: 506/1024 [MB] (16 MBps) [2024-11-29T08:04:03.502Z] Copying: 523/1024 [MB] (16 MBps) [2024-11-29T08:04:04.892Z] Copying: 537/1024 [MB] (14 MBps) [2024-11-29T08:04:05.465Z] Copying: 557/1024 [MB] (19 MBps) [2024-11-29T08:04:06.464Z] Copying: 578/1024 [MB] (21 MBps) [2024-11-29T08:04:07.853Z] Copying: 594/1024 [MB] (15 MBps) [2024-11-29T08:04:08.797Z] Copying: 610/1024 [MB] (16 MBps) [2024-11-29T08:04:09.740Z] Copying: 627/1024 [MB] (17 MBps) [2024-11-29T08:04:10.684Z] Copying: 644/1024 [MB] (16 MBps) [2024-11-29T08:04:11.627Z] Copying: 660/1024 [MB] (15 MBps) [2024-11-29T08:04:12.581Z] Copying: 675/1024 [MB] (15 MBps) [2024-11-29T08:04:13.525Z] Copying: 695/1024 [MB] (19 MBps) [2024-11-29T08:04:14.468Z] Copying: 715/1024 [MB] (19 MBps) [2024-11-29T08:04:15.855Z] Copying: 734/1024 [MB] (18 MBps) [2024-11-29T08:04:16.800Z] Copying: 750/1024 [MB] (16 MBps) [2024-11-29T08:04:17.746Z] Copying: 765/1024 [MB] (14 MBps) [2024-11-29T08:04:18.710Z] Copying: 783/1024 [MB] (18 MBps) [2024-11-29T08:04:19.652Z] Copying: 803/1024 [MB] (19 MBps) [2024-11-29T08:04:20.593Z] Copying: 821/1024 [MB] (17 MBps) [2024-11-29T08:04:21.536Z] Copying: 837/1024 [MB] (15 MBps) [2024-11-29T08:04:22.478Z] Copying: 852/1024 [MB] (15 MBps) [2024-11-29T08:04:23.851Z] Copying: 871/1024 [MB] (18 MBps) [2024-11-29T08:04:24.784Z] Copying: 921/1024 [MB] (49 MBps) [2024-11-29T08:04:25.719Z] Copying: 972/1024 [MB] (51 MBps) [2024-11-29T08:04:26.288Z] Copying: 1004/1024 [MB] (32 MBps) [2024-11-29T08:04:26.288Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-11-29 08:04:26.145650] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:36.344 [2024-11-29 08:04:26.145691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:36.344 [2024-11-29 08:04:26.145704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:36.344 [2024-11-29 08:04:26.145712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:36.344 [2024-11-29 08:04:26.145732] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:36.344 [2024-11-29 08:04:26.148351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:36.344 [2024-11-29 08:04:26.148380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:36.344 [2024-11-29 08:04:26.148396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.604 ms 00:32:36.344 [2024-11-29 08:04:26.148406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:36.344 [2024-11-29 08:04:26.151088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:36.344 [2024-11-29 08:04:26.151118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:36.344 [2024-11-29 08:04:26.151127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.661 ms 00:32:36.344 [2024-11-29 08:04:26.151134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:36.344 [2024-11-29 08:04:26.151158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:36.344 [2024-11-29 08:04:26.151167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:32:36.344 [2024-11-29 08:04:26.151175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:36.344 [2024-11-29 08:04:26.151183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:36.344 [2024-11-29 08:04:26.151228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:36.344 [2024-11-29 08:04:26.151236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:36.344 [2024-11-29 08:04:26.151244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:32:36.344 [2024-11-29 08:04:26.151252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:36.344 [2024-11-29 08:04:26.151265] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:36.344 [2024-11-29 08:04:26.151277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:36.344 [2024-11-29 08:04:26.151287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:36.344 [2024-11-29 08:04:26.151294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:36.344 [2024-11-29 08:04:26.151301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:36.344 [2024-11-29 08:04:26.151308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:36.344 [2024-11-29 08:04:26.151315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:36.344 [2024-11-29 08:04:26.151323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:36.344 [2024-11-29 08:04:26.151330] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:36.344 [2024-11-29 08:04:26.151338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:36.344 [2024-11-29 08:04:26.151345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:36.344 [2024-11-29 08:04:26.151352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:36.344 [2024-11-29 08:04:26.151359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:36.344 [2024-11-29 08:04:26.151366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:36.344 [2024-11-29 08:04:26.151373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:36.344 [2024-11-29 08:04:26.151381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:36.344 [2024-11-29 08:04:26.151388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:36.344 [2024-11-29 08:04:26.151395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:36.344 [2024-11-29 08:04:26.151402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:36.344 [2024-11-29 08:04:26.151411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:36.344 [2024-11-29 08:04:26.151418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:36.344 [2024-11-29 08:04:26.151425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:36.344 [2024-11-29 08:04:26.151432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:36.344 [2024-11-29 08:04:26.151440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:36.344 [2024-11-29 08:04:26.151469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:36.344 [2024-11-29 08:04:26.151476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:36.344 [2024-11-29 08:04:26.151484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:36.344 [2024-11-29 08:04:26.151491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:36.344 [2024-11-29 08:04:26.151498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:36.344 [2024-11-29 08:04:26.151505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:36.344 [2024-11-29 08:04:26.151519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:36.344 [2024-11-29 08:04:26.151527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:36.344 [2024-11-29 08:04:26.151534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:36.344 [2024-11-29 08:04:26.151541] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:36.344 [2024-11-29 08:04:26.151548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:36.344 [2024-11-29 08:04:26.151555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:36.344 [2024-11-29 08:04:26.151563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:36.344 [2024-11-29 08:04:26.151570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:36.344 [2024-11-29 08:04:26.151687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:36.344 [2024-11-29 08:04:26.151695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:36.344 [2024-11-29 08:04:26.151702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:36.344 [2024-11-29 08:04:26.151709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:36.344 [2024-11-29 08:04:26.151716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:36.344 [2024-11-29 08:04:26.151723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:36.344 [2024-11-29 08:04:26.151730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:36.344 [2024-11-29 08:04:26.151737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:36.344 [2024-11-29 08:04:26.151744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:36.344 [2024-11-29 08:04:26.151751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:36.345 [2024-11-29 08:04:26.151758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:36.345 [2024-11-29 08:04:26.151765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:36.345 [2024-11-29 08:04:26.151772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:36.345 [2024-11-29 08:04:26.151780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:36.345 [2024-11-29 08:04:26.151787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:36.345 [2024-11-29 08:04:26.151794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:36.345 [2024-11-29 08:04:26.151802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:36.345 [2024-11-29 08:04:26.151810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:36.345 [2024-11-29 08:04:26.151817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:36.345 [2024-11-29 08:04:26.151824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:36.345 [2024-11-29 
08:04:26.151831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:36.345 [2024-11-29 08:04:26.151838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:36.345 [2024-11-29 08:04:26.151845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:36.345 [2024-11-29 08:04:26.151852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:36.345 [2024-11-29 08:04:26.151859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:36.345 [2024-11-29 08:04:26.151867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:36.345 [2024-11-29 08:04:26.151874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:36.345 [2024-11-29 08:04:26.151882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:36.345 [2024-11-29 08:04:26.151889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:36.345 [2024-11-29 08:04:26.151896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:36.345 [2024-11-29 08:04:26.151903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:36.345 [2024-11-29 08:04:26.151910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:36.345 [2024-11-29 08:04:26.151917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:36.345 [2024-11-29 08:04:26.151924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:36.345 [2024-11-29 08:04:26.151931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:36.345 [2024-11-29 08:04:26.151938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:36.345 [2024-11-29 08:04:26.151952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:36.345 [2024-11-29 08:04:26.151959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:36.345 [2024-11-29 08:04:26.151966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:36.345 [2024-11-29 08:04:26.151972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:36.345 [2024-11-29 08:04:26.151979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:36.345 [2024-11-29 08:04:26.151986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:36.345 [2024-11-29 08:04:26.151993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:36.345 [2024-11-29 08:04:26.152000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:36.345 [2024-11-29 08:04:26.152007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 
00:32:36.345 [2024-11-29 08:04:26.152015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:36.345 [2024-11-29 08:04:26.152022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:36.345 [2024-11-29 08:04:26.152029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:36.345 [2024-11-29 08:04:26.152036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:36.345 [2024-11-29 08:04:26.152043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:36.345 [2024-11-29 08:04:26.152051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:36.345 [2024-11-29 08:04:26.152058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:36.345 [2024-11-29 08:04:26.152065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:36.345 [2024-11-29 08:04:26.152072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:36.345 [2024-11-29 08:04:26.152079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:36.345 [2024-11-29 08:04:26.152086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:36.345 [2024-11-29 08:04:26.152093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:36.345 [2024-11-29 08:04:26.152100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:36.345 [2024-11-29 08:04:26.152107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:36.345 [2024-11-29 08:04:26.152114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:36.345 [2024-11-29 08:04:26.152121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:36.345 [2024-11-29 08:04:26.152128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:36.345 [2024-11-29 08:04:26.152135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:36.345 [2024-11-29 08:04:26.152151] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:36.345 [2024-11-29 08:04:26.152159] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 81b9f93a-4e3e-4f17-ae2b-60de48297168 00:32:36.345 [2024-11-29 08:04:26.152167] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:32:36.345 [2024-11-29 08:04:26.152173] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:32:36.345 [2024-11-29 08:04:26.152180] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:36.345 [2024-11-29 08:04:26.152189] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:36.345 [2024-11-29 08:04:26.152196] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:36.345 [2024-11-29 08:04:26.152203] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:36.345 [2024-11-29 08:04:26.152210] 
ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:36.345 [2024-11-29 08:04:26.152216] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:36.345 [2024-11-29 08:04:26.152222] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:36.345 [2024-11-29 08:04:26.152229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:36.345 [2024-11-29 08:04:26.152236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:36.345 [2024-11-29 08:04:26.152243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.964 ms 00:32:36.345 [2024-11-29 08:04:26.152250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:36.345 [2024-11-29 08:04:26.164797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:36.345 [2024-11-29 08:04:26.164833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:36.345 [2024-11-29 08:04:26.164843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.532 ms 00:32:36.345 [2024-11-29 08:04:26.164850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:36.346 [2024-11-29 08:04:26.165187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:36.346 [2024-11-29 08:04:26.165203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:36.346 [2024-11-29 08:04:26.165212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:32:36.346 [2024-11-29 08:04:26.165219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:36.346 [2024-11-29 08:04:26.198037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:36.346 [2024-11-29 08:04:26.198070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:36.346 [2024-11-29 08:04:26.198081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:36.346 [2024-11-29 08:04:26.198087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:36.346 [2024-11-29 08:04:26.198137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:36.346 [2024-11-29 08:04:26.198145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:36.346 [2024-11-29 08:04:26.198152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:36.346 [2024-11-29 08:04:26.198159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:36.346 [2024-11-29 08:04:26.198202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:36.346 [2024-11-29 08:04:26.198215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:36.346 [2024-11-29 08:04:26.198223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:36.346 [2024-11-29 08:04:26.198229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:36.346 [2024-11-29 08:04:26.198243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:36.346 [2024-11-29 08:04:26.198251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:36.346 [2024-11-29 08:04:26.198261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:36.346 [2024-11-29 08:04:26.198267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:36.346 [2024-11-29 08:04:26.276152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:32:36.346 [2024-11-29 08:04:26.276196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:36.346 [2024-11-29 08:04:26.276206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:36.346 [2024-11-29 08:04:26.276213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:36.606 [2024-11-29 08:04:26.341197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:36.606 [2024-11-29 08:04:26.341248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:36.606 [2024-11-29 08:04:26.341259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:36.606 [2024-11-29 08:04:26.341268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:36.606 [2024-11-29 08:04:26.341334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:36.606 [2024-11-29 08:04:26.341344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:36.606 [2024-11-29 08:04:26.341358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:36.606 [2024-11-29 08:04:26.341365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:36.606 [2024-11-29 08:04:26.341399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:36.606 [2024-11-29 08:04:26.341410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:36.606 [2024-11-29 08:04:26.341418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:36.606 [2024-11-29 08:04:26.341426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:36.606 [2024-11-29 08:04:26.341539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:36.606 [2024-11-29 08:04:26.341550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:36.606 [2024-11-29 08:04:26.341567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:36.606 [2024-11-29 08:04:26.341576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:36.606 [2024-11-29 08:04:26.341603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:36.606 [2024-11-29 08:04:26.341611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:36.606 [2024-11-29 08:04:26.341620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:36.606 [2024-11-29 08:04:26.341627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:36.606 [2024-11-29 08:04:26.341663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:36.606 [2024-11-29 08:04:26.341671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:36.606 [2024-11-29 08:04:26.341679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:36.606 [2024-11-29 08:04:26.341689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:36.606 [2024-11-29 08:04:26.341732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:36.606 [2024-11-29 08:04:26.341742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:36.606 [2024-11-29 08:04:26.341751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:36.606 [2024-11-29 08:04:26.341758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:36.606 [2024-11-29 
08:04:26.341879] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 196.193 ms, result 0 00:32:37.551 00:32:37.551 00:32:37.551 08:04:27 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:32:37.551 [2024-11-29 08:04:27.469368] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:32:37.551 [2024-11-29 08:04:27.469556] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84769 ] 00:32:37.813 [2024-11-29 08:04:27.632960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:37.813 [2024-11-29 08:04:27.753641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:38.387 [2024-11-29 08:04:28.047244] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:38.387 [2024-11-29 08:04:28.047337] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:38.387 [2024-11-29 08:04:28.208054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.387 [2024-11-29 08:04:28.208122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:38.387 [2024-11-29 08:04:28.208137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:38.387 [2024-11-29 08:04:28.208146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.387 [2024-11-29 08:04:28.208200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.387 [2024-11-29 08:04:28.208213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:38.387 [2024-11-29 08:04:28.208222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:32:38.387 [2024-11-29 08:04:28.208230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.387 [2024-11-29 08:04:28.208250] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:38.387 [2024-11-29 08:04:28.209068] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:38.387 [2024-11-29 08:04:28.209111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.387 [2024-11-29 08:04:28.209120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:38.387 [2024-11-29 08:04:28.209131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.866 ms 00:32:38.387 [2024-11-29 08:04:28.209139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.387 [2024-11-29 08:04:28.209584] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:32:38.387 [2024-11-29 08:04:28.209664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.387 [2024-11-29 08:04:28.209677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:38.387 [2024-11-29 08:04:28.209688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:32:38.387 [2024-11-29 08:04:28.209696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
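Two details worth noting at this restart. First, the shutdown above took the fast path ('FTL fast shutdown', 196.193 ms) and left clean state in shared memory, so this bring-up logs "SHM: clean 1, shm_clean 1" and loads the superblock in 0.085 ms, versus "SHM: clean 0, shm_clean 0" and 14.447 ms on the first, from-media startup earlier in this run. Second, the surrounding restore.sh calls form a write/read round-trip through ftl0; a condensed sketch of that flow, with paths shortened and the final comparison assumed (the saved checksum and the compare step fall outside this excerpt):

  dd if=/dev/urandom of=testfile bs=4K count=256K                    # @69: 1 GiB of random test data
  md5sum testfile > testfile.md5                                     # @70 (redirection to a file is assumed here)
  spdk_dd --if=testfile --ob=ftl0 --json=ftl.json                    # @73: write through the FTL bdev
  # ... target shut down via the fast path and restarted ...
  spdk_dd --ib=ftl0 --of=testfile --json=ftl.json --count=262144     # @74: read the data back
  md5sum -c testfile.md5                                             # assumed verification step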
00:32:38.387 [2024-11-29 08:04:28.209749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.387 [2024-11-29 08:04:28.209759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:38.387 [2024-11-29 08:04:28.209767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:32:38.387 [2024-11-29 08:04:28.209774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.387 [2024-11-29 08:04:28.210053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.387 [2024-11-29 08:04:28.210071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:38.387 [2024-11-29 08:04:28.210081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.245 ms 00:32:38.387 [2024-11-29 08:04:28.210089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.387 [2024-11-29 08:04:28.210159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.387 [2024-11-29 08:04:28.210168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:38.387 [2024-11-29 08:04:28.210177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:32:38.387 [2024-11-29 08:04:28.210185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.387 [2024-11-29 08:04:28.210208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.387 [2024-11-29 08:04:28.210216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:38.387 [2024-11-29 08:04:28.210227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:38.387 [2024-11-29 08:04:28.210234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.387 [2024-11-29 08:04:28.210253] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:38.387 [2024-11-29 08:04:28.214509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.387 [2024-11-29 08:04:28.214551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:38.387 [2024-11-29 08:04:28.214561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.260 ms 00:32:38.387 [2024-11-29 08:04:28.214568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.387 [2024-11-29 08:04:28.214607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.387 [2024-11-29 08:04:28.214615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:38.387 [2024-11-29 08:04:28.214623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:32:38.387 [2024-11-29 08:04:28.214630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.387 [2024-11-29 08:04:28.214683] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:38.387 [2024-11-29 08:04:28.214707] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:38.387 [2024-11-29 08:04:28.214747] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:38.387 [2024-11-29 08:04:28.214763] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:38.387 [2024-11-29 08:04:28.214867] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: 
[FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:38.387 [2024-11-29 08:04:28.214878] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:38.387 [2024-11-29 08:04:28.214889] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:38.387 [2024-11-29 08:04:28.214899] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:38.387 [2024-11-29 08:04:28.214908] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:38.387 [2024-11-29 08:04:28.214918] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:38.387 [2024-11-29 08:04:28.214927] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:38.387 [2024-11-29 08:04:28.214934] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:38.387 [2024-11-29 08:04:28.214942] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:38.387 [2024-11-29 08:04:28.214950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.387 [2024-11-29 08:04:28.214957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:38.387 [2024-11-29 08:04:28.214965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:32:38.387 [2024-11-29 08:04:28.214973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.387 [2024-11-29 08:04:28.215056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.387 [2024-11-29 08:04:28.215065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:38.387 [2024-11-29 08:04:28.215072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:32:38.388 [2024-11-29 08:04:28.215082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.388 [2024-11-29 08:04:28.215182] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:38.388 [2024-11-29 08:04:28.215195] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:38.388 [2024-11-29 08:04:28.215204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:38.388 [2024-11-29 08:04:28.215212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:38.388 [2024-11-29 08:04:28.215221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:38.388 [2024-11-29 08:04:28.215228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:38.388 [2024-11-29 08:04:28.215235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:38.388 [2024-11-29 08:04:28.215242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:38.388 [2024-11-29 08:04:28.215250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:38.388 [2024-11-29 08:04:28.215257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:38.388 [2024-11-29 08:04:28.215264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:38.388 [2024-11-29 08:04:28.215273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:38.388 [2024-11-29 08:04:28.215280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:38.388 [2024-11-29 08:04:28.215288] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region nvc_md 00:32:38.388 [2024-11-29 08:04:28.215295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:38.388 [2024-11-29 08:04:28.215308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:38.388 [2024-11-29 08:04:28.215316] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:38.388 [2024-11-29 08:04:28.215323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:38.388 [2024-11-29 08:04:28.215330] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:38.388 [2024-11-29 08:04:28.215337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:38.388 [2024-11-29 08:04:28.215343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:38.388 [2024-11-29 08:04:28.215350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:38.388 [2024-11-29 08:04:28.215357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:38.388 [2024-11-29 08:04:28.215363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:38.388 [2024-11-29 08:04:28.215370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:38.388 [2024-11-29 08:04:28.215376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:38.388 [2024-11-29 08:04:28.215383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:38.388 [2024-11-29 08:04:28.215389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:38.388 [2024-11-29 08:04:28.215396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:38.388 [2024-11-29 08:04:28.215402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:38.388 [2024-11-29 08:04:28.215409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:38.388 [2024-11-29 08:04:28.215415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:38.388 [2024-11-29 08:04:28.215422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:38.388 [2024-11-29 08:04:28.215428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:38.388 [2024-11-29 08:04:28.215434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:38.388 [2024-11-29 08:04:28.215457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:38.388 [2024-11-29 08:04:28.215464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:38.388 [2024-11-29 08:04:28.215472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:38.388 [2024-11-29 08:04:28.215479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:38.388 [2024-11-29 08:04:28.215485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:38.388 [2024-11-29 08:04:28.215492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:38.388 [2024-11-29 08:04:28.215498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:38.388 [2024-11-29 08:04:28.215505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:38.388 [2024-11-29 08:04:28.215513] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:38.388 [2024-11-29 08:04:28.215526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:38.388 [2024-11-29 
08:04:28.215533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:38.388 [2024-11-29 08:04:28.215540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:38.388 [2024-11-29 08:04:28.215551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:38.388 [2024-11-29 08:04:28.215559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:38.388 [2024-11-29 08:04:28.215565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:38.388 [2024-11-29 08:04:28.215572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:38.388 [2024-11-29 08:04:28.215579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:38.388 [2024-11-29 08:04:28.215586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:38.388 [2024-11-29 08:04:28.215595] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:38.388 [2024-11-29 08:04:28.215604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:38.388 [2024-11-29 08:04:28.215612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:38.388 [2024-11-29 08:04:28.215621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:38.388 [2024-11-29 08:04:28.215629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:38.388 [2024-11-29 08:04:28.215636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:38.388 [2024-11-29 08:04:28.215643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:38.388 [2024-11-29 08:04:28.215650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:38.388 [2024-11-29 08:04:28.215657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:38.388 [2024-11-29 08:04:28.215664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:38.388 [2024-11-29 08:04:28.215671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:38.388 [2024-11-29 08:04:28.215678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:38.388 [2024-11-29 08:04:28.215686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:38.388 [2024-11-29 08:04:28.215693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:38.388 [2024-11-29 08:04:28.215700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:38.388 [2024-11-29 08:04:28.215708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:38.388 [2024-11-29 08:04:28.215715] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:38.388 [2024-11-29 08:04:28.215723] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:38.388 [2024-11-29 08:04:28.215731] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:38.388 [2024-11-29 08:04:28.215738] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:38.388 [2024-11-29 08:04:28.215745] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:38.388 [2024-11-29 08:04:28.215753] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:38.388 [2024-11-29 08:04:28.215766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.388 [2024-11-29 08:04:28.215774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:38.388 [2024-11-29 08:04:28.215782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.652 ms 00:32:38.388 [2024-11-29 08:04:28.215788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.388 [2024-11-29 08:04:28.243230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.388 [2024-11-29 08:04:28.243279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:38.388 [2024-11-29 08:04:28.243290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.400 ms 00:32:38.388 [2024-11-29 08:04:28.243298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.388 [2024-11-29 08:04:28.243384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.388 [2024-11-29 08:04:28.243392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:38.388 [2024-11-29 08:04:28.243404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:32:38.388 [2024-11-29 08:04:28.243412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.388 [2024-11-29 08:04:28.294544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.388 [2024-11-29 08:04:28.294601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:38.388 [2024-11-29 08:04:28.294615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.063 ms 00:32:38.388 [2024-11-29 08:04:28.294624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.388 [2024-11-29 08:04:28.294673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.388 [2024-11-29 08:04:28.294684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:38.388 [2024-11-29 08:04:28.294693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:38.388 [2024-11-29 08:04:28.294701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.388 [2024-11-29 08:04:28.294811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.388 [2024-11-29 08:04:28.294823] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:38.388 [2024-11-29 08:04:28.294832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:32:38.388 [2024-11-29 08:04:28.294840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.389 [2024-11-29 08:04:28.294968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.389 [2024-11-29 08:04:28.294988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:38.389 [2024-11-29 08:04:28.294997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:32:38.389 [2024-11-29 08:04:28.295005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.389 [2024-11-29 08:04:28.310606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.389 [2024-11-29 08:04:28.310654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:38.389 [2024-11-29 08:04:28.310665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.581 ms 00:32:38.389 [2024-11-29 08:04:28.310672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.389 [2024-11-29 08:04:28.310821] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:32:38.389 [2024-11-29 08:04:28.310835] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:38.389 [2024-11-29 08:04:28.310848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.389 [2024-11-29 08:04:28.310856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:38.389 [2024-11-29 08:04:28.310865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:32:38.389 [2024-11-29 08:04:28.310872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.389 [2024-11-29 08:04:28.323158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.389 [2024-11-29 08:04:28.323204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:38.389 [2024-11-29 08:04:28.323215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.271 ms 00:32:38.389 [2024-11-29 08:04:28.323223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.389 [2024-11-29 08:04:28.323352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.389 [2024-11-29 08:04:28.323362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:38.389 [2024-11-29 08:04:28.323371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:32:38.389 [2024-11-29 08:04:28.323383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.389 [2024-11-29 08:04:28.323433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.389 [2024-11-29 08:04:28.323466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:38.389 [2024-11-29 08:04:28.323484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:32:38.389 [2024-11-29 08:04:28.323492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.389 [2024-11-29 08:04:28.324075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.389 [2024-11-29 08:04:28.324103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L 
checkpointing 00:32:38.389 [2024-11-29 08:04:28.324112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.545 ms 00:32:38.389 [2024-11-29 08:04:28.324119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.389 [2024-11-29 08:04:28.324140] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:32:38.389 [2024-11-29 08:04:28.324150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.389 [2024-11-29 08:04:28.324159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:38.389 [2024-11-29 08:04:28.324168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:32:38.389 [2024-11-29 08:04:28.324176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.650 [2024-11-29 08:04:28.336770] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:38.650 [2024-11-29 08:04:28.336934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.650 [2024-11-29 08:04:28.336945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:38.650 [2024-11-29 08:04:28.336956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.741 ms 00:32:38.650 [2024-11-29 08:04:28.336964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.650 [2024-11-29 08:04:28.339144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.651 [2024-11-29 08:04:28.339176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:38.651 [2024-11-29 08:04:28.339186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.157 ms 00:32:38.651 [2024-11-29 08:04:28.339193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.651 [2024-11-29 08:04:28.339282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.651 [2024-11-29 08:04:28.339292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:38.651 [2024-11-29 08:04:28.339301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:32:38.651 [2024-11-29 08:04:28.339309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.651 [2024-11-29 08:04:28.339333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.651 [2024-11-29 08:04:28.339347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:38.651 [2024-11-29 08:04:28.339355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:38.651 [2024-11-29 08:04:28.339363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.651 [2024-11-29 08:04:28.339393] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:38.651 [2024-11-29 08:04:28.339404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.651 [2024-11-29 08:04:28.339411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:38.651 [2024-11-29 08:04:28.339419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:32:38.651 [2024-11-29 08:04:28.339426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.651 [2024-11-29 08:04:28.366057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.651 [2024-11-29 08:04:28.366109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Set FTL dirty state 00:32:38.651 [2024-11-29 08:04:28.366122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.595 ms 00:32:38.651 [2024-11-29 08:04:28.366130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.651 [2024-11-29 08:04:28.366217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.651 [2024-11-29 08:04:28.366227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:38.651 [2024-11-29 08:04:28.366237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:32:38.651 [2024-11-29 08:04:28.366246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.651 [2024-11-29 08:04:28.367397] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 158.887 ms, result 0 00:32:40.040  [2024-11-29T08:04:30.556Z] Copying: 11/1024 [MB] (11 MBps) [2024-11-29T08:04:31.945Z] Copying: 26/1024 [MB] (14 MBps) [2024-11-29T08:04:32.891Z] Copying: 40/1024 [MB] (13 MBps) [2024-11-29T08:04:33.837Z] Copying: 52/1024 [MB] (12 MBps) [2024-11-29T08:04:34.855Z] Copying: 68/1024 [MB] (16 MBps) [2024-11-29T08:04:35.798Z] Copying: 80/1024 [MB] (12 MBps) [2024-11-29T08:04:36.744Z] Copying: 91/1024 [MB] (11 MBps) [2024-11-29T08:04:37.689Z] Copying: 105/1024 [MB] (13 MBps) [2024-11-29T08:04:38.631Z] Copying: 120/1024 [MB] (15 MBps) [2024-11-29T08:04:39.577Z] Copying: 141/1024 [MB] (21 MBps) [2024-11-29T08:04:40.963Z] Copying: 152/1024 [MB] (10 MBps) [2024-11-29T08:04:41.906Z] Copying: 165/1024 [MB] (13 MBps) [2024-11-29T08:04:42.850Z] Copying: 181/1024 [MB] (16 MBps) [2024-11-29T08:04:43.793Z] Copying: 194/1024 [MB] (13 MBps) [2024-11-29T08:04:44.736Z] Copying: 207/1024 [MB] (12 MBps) [2024-11-29T08:04:45.679Z] Copying: 218/1024 [MB] (11 MBps) [2024-11-29T08:04:46.622Z] Copying: 236/1024 [MB] (17 MBps) [2024-11-29T08:04:47.566Z] Copying: 250/1024 [MB] (14 MBps) [2024-11-29T08:04:48.956Z] Copying: 263/1024 [MB] (12 MBps) [2024-11-29T08:04:49.901Z] Copying: 276/1024 [MB] (13 MBps) [2024-11-29T08:04:50.847Z] Copying: 293/1024 [MB] (17 MBps) [2024-11-29T08:04:51.792Z] Copying: 304/1024 [MB] (11 MBps) [2024-11-29T08:04:52.736Z] Copying: 315/1024 [MB] (11 MBps) [2024-11-29T08:04:53.676Z] Copying: 327/1024 [MB] (11 MBps) [2024-11-29T08:04:54.617Z] Copying: 346/1024 [MB] (19 MBps) [2024-11-29T08:04:55.559Z] Copying: 361/1024 [MB] (15 MBps) [2024-11-29T08:04:56.944Z] Copying: 381/1024 [MB] (19 MBps) [2024-11-29T08:04:57.885Z] Copying: 397/1024 [MB] (16 MBps) [2024-11-29T08:04:58.826Z] Copying: 416/1024 [MB] (18 MBps) [2024-11-29T08:04:59.765Z] Copying: 427/1024 [MB] (10 MBps) [2024-11-29T08:05:00.734Z] Copying: 448/1024 [MB] (21 MBps) [2024-11-29T08:05:01.677Z] Copying: 468/1024 [MB] (19 MBps) [2024-11-29T08:05:02.617Z] Copying: 484/1024 [MB] (15 MBps) [2024-11-29T08:05:03.612Z] Copying: 505/1024 [MB] (20 MBps) [2024-11-29T08:05:04.997Z] Copying: 520/1024 [MB] (14 MBps) [2024-11-29T08:05:05.566Z] Copying: 539/1024 [MB] (19 MBps) [2024-11-29T08:05:06.952Z] Copying: 556/1024 [MB] (17 MBps) [2024-11-29T08:05:07.892Z] Copying: 574/1024 [MB] (17 MBps) [2024-11-29T08:05:08.836Z] Copying: 593/1024 [MB] (19 MBps) [2024-11-29T08:05:09.780Z] Copying: 617/1024 [MB] (23 MBps) [2024-11-29T08:05:10.727Z] Copying: 639/1024 [MB] (22 MBps) [2024-11-29T08:05:11.667Z] Copying: 659/1024 [MB] (20 MBps) [2024-11-29T08:05:12.612Z] Copying: 681/1024 [MB] (21 MBps) [2024-11-29T08:05:14.000Z] Copying: 694/1024 [MB] (13 MBps) 
[2024-11-29T08:05:14.573Z] Copying: 721/1024 [MB] (26 MBps) [2024-11-29T08:05:15.955Z] Copying: 736/1024 [MB] (14 MBps) [2024-11-29T08:05:16.898Z] Copying: 758/1024 [MB] (22 MBps) [2024-11-29T08:05:17.840Z] Copying: 777/1024 [MB] (19 MBps) [2024-11-29T08:05:18.784Z] Copying: 802/1024 [MB] (24 MBps) [2024-11-29T08:05:19.725Z] Copying: 826/1024 [MB] (24 MBps) [2024-11-29T08:05:20.668Z] Copying: 843/1024 [MB] (16 MBps) [2024-11-29T08:05:21.611Z] Copying: 860/1024 [MB] (17 MBps) [2024-11-29T08:05:22.997Z] Copying: 880/1024 [MB] (20 MBps) [2024-11-29T08:05:23.569Z] Copying: 899/1024 [MB] (18 MBps) [2024-11-29T08:05:24.956Z] Copying: 918/1024 [MB] (19 MBps) [2024-11-29T08:05:25.897Z] Copying: 938/1024 [MB] (20 MBps) [2024-11-29T08:05:26.843Z] Copying: 954/1024 [MB] (15 MBps) [2024-11-29T08:05:27.788Z] Copying: 975/1024 [MB] (21 MBps) [2024-11-29T08:05:28.733Z] Copying: 989/1024 [MB] (13 MBps) [2024-11-29T08:05:29.303Z] Copying: 1013/1024 [MB] (24 MBps) [2024-11-29T08:05:29.566Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-29 08:05:29.432317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.622 [2024-11-29 08:05:29.432418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:39.622 [2024-11-29 08:05:29.432473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:39.622 [2024-11-29 08:05:29.432487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.622 [2024-11-29 08:05:29.432536] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:39.622 [2024-11-29 08:05:29.435944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.622 [2024-11-29 08:05:29.435984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:39.622 [2024-11-29 08:05:29.435996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.386 ms 00:33:39.622 [2024-11-29 08:05:29.436006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.622 [2024-11-29 08:05:29.436256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.622 [2024-11-29 08:05:29.436267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:39.622 [2024-11-29 08:05:29.436277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.221 ms 00:33:39.622 [2024-11-29 08:05:29.436285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.622 [2024-11-29 08:05:29.436319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.622 [2024-11-29 08:05:29.436328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:33:39.622 [2024-11-29 08:05:29.436337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:39.622 [2024-11-29 08:05:29.436345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.622 [2024-11-29 08:05:29.436407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.622 [2024-11-29 08:05:29.436416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:33:39.622 [2024-11-29 08:05:29.436425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:33:39.622 [2024-11-29 08:05:29.436434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.622 [2024-11-29 08:05:29.436461] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 
00:33:39.622 [2024-11-29 08:05:29.436476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:33:39.622 [2024-11-29 08:05:29.436490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:39.622 [2024-11-29 08:05:29.436499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:39.622 [2024-11-29 08:05:29.436507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:39.622 [2024-11-29 08:05:29.436515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:39.622 [2024-11-29 08:05:29.436522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:39.622 [2024-11-29 08:05:29.436530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:39.622 [2024-11-29 08:05:29.436538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:39.622 [2024-11-29 08:05:29.436546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:39.622 [2024-11-29 08:05:29.436554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:39.622 [2024-11-29 08:05:29.436562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:39.622 [2024-11-29 08:05:29.436569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:39.622 [2024-11-29 08:05:29.436580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:39.622 [2024-11-29 08:05:29.436588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:39.622 [2024-11-29 08:05:29.436596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:39.622 [2024-11-29 08:05:29.436604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:39.622 [2024-11-29 08:05:29.436611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:39.622 [2024-11-29 08:05:29.436619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:39.622 [2024-11-29 08:05:29.436627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:39.622 [2024-11-29 08:05:29.436634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:39.622 [2024-11-29 08:05:29.436641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:39.622 [2024-11-29 08:05:29.436649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:39.622 [2024-11-29 08:05:29.436656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:39.622 [2024-11-29 08:05:29.436664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:39.622 [2024-11-29 08:05:29.436671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 
state: free 00:33:39.622 [2024-11-29 08:05:29.436679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:39.622 [2024-11-29 08:05:29.436686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:39.622 [2024-11-29 08:05:29.436695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:39.622 [2024-11-29 08:05:29.436703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:39.622 [2024-11-29 08:05:29.436719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:39.622 [2024-11-29 08:05:29.436727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:39.622 [2024-11-29 08:05:29.436735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:39.622 [2024-11-29 08:05:29.436743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:39.622 [2024-11-29 08:05:29.436750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:39.622 [2024-11-29 08:05:29.436757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:39.622 [2024-11-29 08:05:29.436765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:39.622 [2024-11-29 08:05:29.436772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:39.622 [2024-11-29 08:05:29.436779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:39.622 [2024-11-29 08:05:29.436787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:39.622 [2024-11-29 08:05:29.436795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:39.622 [2024-11-29 08:05:29.436803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:39.622 [2024-11-29 08:05:29.436811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:39.622 [2024-11-29 08:05:29.436819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:39.622 [2024-11-29 08:05:29.436826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:39.622 [2024-11-29 08:05:29.436837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:39.622 [2024-11-29 08:05:29.436856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:39.622 [2024-11-29 08:05:29.436863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:39.622 [2024-11-29 08:05:29.436871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:39.623 [2024-11-29 08:05:29.436878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:39.623 [2024-11-29 08:05:29.436886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 
0 / 261120 wr_cnt: 0 state: free 00:33:39.623 [2024-11-29 08:05:29.436893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:39.623 [2024-11-29 08:05:29.436901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:39.623 [2024-11-29 08:05:29.436909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:39.623 [2024-11-29 08:05:29.436917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:39.623 [2024-11-29 08:05:29.436925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:39.623 [2024-11-29 08:05:29.436932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:39.623 [2024-11-29 08:05:29.436940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:39.623 [2024-11-29 08:05:29.436949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:39.623 [2024-11-29 08:05:29.436956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:39.623 [2024-11-29 08:05:29.436964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:39.623 [2024-11-29 08:05:29.436972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:39.623 [2024-11-29 08:05:29.436980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:39.623 [2024-11-29 08:05:29.436987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:39.623 [2024-11-29 08:05:29.436996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:39.623 [2024-11-29 08:05:29.437004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:39.623 [2024-11-29 08:05:29.437011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:39.623 [2024-11-29 08:05:29.437019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:39.623 [2024-11-29 08:05:29.437026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:39.623 [2024-11-29 08:05:29.437034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:39.623 [2024-11-29 08:05:29.437041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:39.623 [2024-11-29 08:05:29.437049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:39.623 [2024-11-29 08:05:29.437056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:39.623 [2024-11-29 08:05:29.437064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:39.623 [2024-11-29 08:05:29.437072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:39.623 [2024-11-29 08:05:29.437080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:39.623 [2024-11-29 08:05:29.437087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:39.623 [2024-11-29 08:05:29.437096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:39.623 [2024-11-29 08:05:29.437105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:39.623 [2024-11-29 08:05:29.437112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:39.623 [2024-11-29 08:05:29.437120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:39.623 [2024-11-29 08:05:29.437127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:39.623 [2024-11-29 08:05:29.437135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:39.623 [2024-11-29 08:05:29.437142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:39.623 [2024-11-29 08:05:29.437150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:39.623 [2024-11-29 08:05:29.437157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:39.623 [2024-11-29 08:05:29.437165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:39.623 [2024-11-29 08:05:29.437172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:39.623 [2024-11-29 08:05:29.437180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:39.623 [2024-11-29 08:05:29.437187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:39.623 [2024-11-29 08:05:29.437195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:39.623 [2024-11-29 08:05:29.437202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:39.623 [2024-11-29 08:05:29.437210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:39.623 [2024-11-29 08:05:29.437218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:39.623 [2024-11-29 08:05:29.437225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:39.623 [2024-11-29 08:05:29.437232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:39.623 [2024-11-29 08:05:29.437239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:39.623 [2024-11-29 08:05:29.437247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:39.623 [2024-11-29 08:05:29.437254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:39.623 [2024-11-29 08:05:29.437261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:39.623 [2024-11-29 08:05:29.437268] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:39.623 [2024-11-29 08:05:29.437283] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:39.623 [2024-11-29 08:05:29.437302] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 81b9f93a-4e3e-4f17-ae2b-60de48297168 00:33:39.623 [2024-11-29 08:05:29.437310] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:33:39.623 [2024-11-29 08:05:29.437317] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:33:39.623 [2024-11-29 08:05:29.437324] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:33:39.623 [2024-11-29 08:05:29.437332] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:33:39.623 [2024-11-29 08:05:29.437340] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:39.623 [2024-11-29 08:05:29.437359] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:39.623 [2024-11-29 08:05:29.438113] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:39.623 [2024-11-29 08:05:29.438129] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:39.623 [2024-11-29 08:05:29.438137] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:39.623 [2024-11-29 08:05:29.438146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.623 [2024-11-29 08:05:29.438156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:39.623 [2024-11-29 08:05:29.438166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.686 ms 00:33:39.623 [2024-11-29 08:05:29.438180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.623 [2024-11-29 08:05:29.452794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.623 [2024-11-29 08:05:29.452835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:39.623 [2024-11-29 08:05:29.452848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.593 ms 00:33:39.623 [2024-11-29 08:05:29.452857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.624 [2024-11-29 08:05:29.453253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.624 [2024-11-29 08:05:29.453273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:39.624 [2024-11-29 08:05:29.453291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.370 ms 00:33:39.624 [2024-11-29 08:05:29.453298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.624 [2024-11-29 08:05:29.490126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:39.624 [2024-11-29 08:05:29.490169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:39.624 [2024-11-29 08:05:29.490182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:39.624 [2024-11-29 08:05:29.490192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.624 [2024-11-29 08:05:29.490267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:39.624 [2024-11-29 08:05:29.490277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:39.624 [2024-11-29 08:05:29.490291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:39.624 [2024-11-29 08:05:29.490300] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.624 [2024-11-29 08:05:29.490361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:39.624 [2024-11-29 08:05:29.490373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:39.624 [2024-11-29 08:05:29.490382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:39.624 [2024-11-29 08:05:29.490392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.624 [2024-11-29 08:05:29.490410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:39.624 [2024-11-29 08:05:29.490419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:39.624 [2024-11-29 08:05:29.490428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:39.624 [2024-11-29 08:05:29.490440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.886 [2024-11-29 08:05:29.576317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:39.886 [2024-11-29 08:05:29.576380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:39.886 [2024-11-29 08:05:29.576394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:39.886 [2024-11-29 08:05:29.576403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.886 [2024-11-29 08:05:29.646105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:39.886 [2024-11-29 08:05:29.646158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:39.886 [2024-11-29 08:05:29.646171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:39.886 [2024-11-29 08:05:29.646185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.886 [2024-11-29 08:05:29.646266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:39.886 [2024-11-29 08:05:29.646276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:39.886 [2024-11-29 08:05:29.646285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:39.886 [2024-11-29 08:05:29.646293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.886 [2024-11-29 08:05:29.646339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:39.886 [2024-11-29 08:05:29.646348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:39.886 [2024-11-29 08:05:29.646357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:39.886 [2024-11-29 08:05:29.646365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.886 [2024-11-29 08:05:29.646483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:39.886 [2024-11-29 08:05:29.646494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:39.886 [2024-11-29 08:05:29.646503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:39.886 [2024-11-29 08:05:29.646511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.886 [2024-11-29 08:05:29.646538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:39.886 [2024-11-29 08:05:29.646548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:39.886 [2024-11-29 08:05:29.646555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:33:39.886 [2024-11-29 08:05:29.646563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.886 [2024-11-29 08:05:29.646605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:39.886 [2024-11-29 08:05:29.646614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:39.886 [2024-11-29 08:05:29.646622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:39.886 [2024-11-29 08:05:29.646630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.886 [2024-11-29 08:05:29.646677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:39.886 [2024-11-29 08:05:29.646688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:39.886 [2024-11-29 08:05:29.646696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:39.886 [2024-11-29 08:05:29.646703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.886 [2024-11-29 08:05:29.646842] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 214.497 ms, result 0 00:33:40.460 00:33:40.460 00:33:40.721 08:05:30 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:43.335 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:33:43.335 08:05:32 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:33:43.335 [2024-11-29 08:05:32.748516] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:33:43.335 [2024-11-29 08:05:32.748690] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85417 ] 00:33:43.335 [2024-11-29 08:05:32.917082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:43.335 [2024-11-29 08:05:33.037920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:43.598 [2024-11-29 08:05:33.334937] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:43.598 [2024-11-29 08:05:33.335030] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:43.598 [2024-11-29 08:05:33.497223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:43.598 [2024-11-29 08:05:33.497289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:43.598 [2024-11-29 08:05:33.497304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:33:43.598 [2024-11-29 08:05:33.497313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:43.598 [2024-11-29 08:05:33.497370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:43.598 [2024-11-29 08:05:33.497384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:43.598 [2024-11-29 08:05:33.497394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:33:43.598 [2024-11-29 08:05:33.497402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:43.598 [2024-11-29 08:05:33.497423] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:43.598 [2024-11-29 08:05:33.498239] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:43.598 [2024-11-29 08:05:33.498269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:43.598 [2024-11-29 08:05:33.498278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:43.598 [2024-11-29 08:05:33.498287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.851 ms 00:33:43.598 [2024-11-29 08:05:33.498296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:43.598 [2024-11-29 08:05:33.498964] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:33:43.598 [2024-11-29 08:05:33.499040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:43.598 [2024-11-29 08:05:33.499056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:33:43.598 [2024-11-29 08:05:33.499068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:33:43.598 [2024-11-29 08:05:33.499076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:43.598 [2024-11-29 08:05:33.499194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:43.598 [2024-11-29 08:05:33.499208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:33:43.598 [2024-11-29 08:05:33.499218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:33:43.598 [2024-11-29 08:05:33.499225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:43.598 [2024-11-29 08:05:33.499565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:33:43.598 [2024-11-29 08:05:33.499587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:43.598 [2024-11-29 08:05:33.499597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:33:43.598 [2024-11-29 08:05:33.499605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:43.598 [2024-11-29 08:05:33.499682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:43.598 [2024-11-29 08:05:33.499693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:43.598 [2024-11-29 08:05:33.499702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:33:43.598 [2024-11-29 08:05:33.499710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:43.598 [2024-11-29 08:05:33.499734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:43.598 [2024-11-29 08:05:33.499742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:43.598 [2024-11-29 08:05:33.499754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:33:43.598 [2024-11-29 08:05:33.499761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:43.598 [2024-11-29 08:05:33.499786] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:43.598 [2024-11-29 08:05:33.504013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:43.598 [2024-11-29 08:05:33.504051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:43.598 [2024-11-29 08:05:33.504061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.233 ms 00:33:43.598 [2024-11-29 08:05:33.504069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:43.598 [2024-11-29 08:05:33.504109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:43.598 [2024-11-29 08:05:33.504117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:43.598 [2024-11-29 08:05:33.504126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:33:43.598 [2024-11-29 08:05:33.504133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:43.598 [2024-11-29 08:05:33.504192] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:43.598 [2024-11-29 08:05:33.504216] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:43.598 [2024-11-29 08:05:33.504255] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:43.598 [2024-11-29 08:05:33.504273] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:33:43.598 [2024-11-29 08:05:33.504378] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:43.598 [2024-11-29 08:05:33.504389] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:43.598 [2024-11-29 08:05:33.504401] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:43.598 [2024-11-29 08:05:33.504413] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:43.598 [2024-11-29 08:05:33.504422] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:43.598 [2024-11-29 08:05:33.504434] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:43.598 [2024-11-29 08:05:33.504458] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:43.598 [2024-11-29 08:05:33.504466] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:43.598 [2024-11-29 08:05:33.504474] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:43.598 [2024-11-29 08:05:33.504482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:43.598 [2024-11-29 08:05:33.504490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:43.598 [2024-11-29 08:05:33.504497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:33:43.598 [2024-11-29 08:05:33.504505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:43.598 [2024-11-29 08:05:33.504592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:43.598 [2024-11-29 08:05:33.504601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:43.598 [2024-11-29 08:05:33.504609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:33:43.598 [2024-11-29 08:05:33.504620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:43.598 [2024-11-29 08:05:33.504723] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:43.598 [2024-11-29 08:05:33.504734] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:43.598 [2024-11-29 08:05:33.504743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:43.598 [2024-11-29 08:05:33.504751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:43.598 [2024-11-29 08:05:33.504758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:43.598 [2024-11-29 08:05:33.504765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:43.598 [2024-11-29 08:05:33.504772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:43.598 [2024-11-29 08:05:33.504780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:43.598 [2024-11-29 08:05:33.504787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:43.598 [2024-11-29 08:05:33.504794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:43.598 [2024-11-29 08:05:33.504801] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:43.598 [2024-11-29 08:05:33.504811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:43.598 [2024-11-29 08:05:33.504819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:43.598 [2024-11-29 08:05:33.504826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:43.598 [2024-11-29 08:05:33.504833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:43.598 [2024-11-29 08:05:33.504846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:43.598 [2024-11-29 08:05:33.504853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:43.598 [2024-11-29 08:05:33.504860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:43.598 [2024-11-29 08:05:33.504867] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:43.598 [2024-11-29 08:05:33.504874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:43.598 [2024-11-29 08:05:33.504881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:43.598 [2024-11-29 08:05:33.504887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:43.598 [2024-11-29 08:05:33.504893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:43.598 [2024-11-29 08:05:33.504900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:43.598 [2024-11-29 08:05:33.504907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:43.598 [2024-11-29 08:05:33.504915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:43.598 [2024-11-29 08:05:33.504922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:43.598 [2024-11-29 08:05:33.504928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:43.598 [2024-11-29 08:05:33.504935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:43.598 [2024-11-29 08:05:33.504942] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:43.598 [2024-11-29 08:05:33.504948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:43.598 [2024-11-29 08:05:33.504954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:43.599 [2024-11-29 08:05:33.504960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:43.599 [2024-11-29 08:05:33.504967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:43.599 [2024-11-29 08:05:33.504974] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:43.599 [2024-11-29 08:05:33.504982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:43.599 [2024-11-29 08:05:33.504989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:43.599 [2024-11-29 08:05:33.504995] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:43.599 [2024-11-29 08:05:33.505002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:43.599 [2024-11-29 08:05:33.505008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:43.599 [2024-11-29 08:05:33.505015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:43.599 [2024-11-29 08:05:33.505021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:43.599 [2024-11-29 08:05:33.505029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:43.599 [2024-11-29 08:05:33.505036] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:43.599 [2024-11-29 08:05:33.505045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:43.599 [2024-11-29 08:05:33.505052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:43.599 [2024-11-29 08:05:33.505060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:43.599 [2024-11-29 08:05:33.505071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:43.599 [2024-11-29 08:05:33.505077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:43.599 [2024-11-29 08:05:33.505084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:43.599 
[2024-11-29 08:05:33.505090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:43.599 [2024-11-29 08:05:33.505097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:43.599 [2024-11-29 08:05:33.505103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:43.599 [2024-11-29 08:05:33.505112] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:43.599 [2024-11-29 08:05:33.505122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:43.599 [2024-11-29 08:05:33.505131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:43.599 [2024-11-29 08:05:33.505138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:43.599 [2024-11-29 08:05:33.505145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:43.599 [2024-11-29 08:05:33.505152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:43.599 [2024-11-29 08:05:33.505159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:43.599 [2024-11-29 08:05:33.505166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:43.599 [2024-11-29 08:05:33.505173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:43.599 [2024-11-29 08:05:33.505180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:43.599 [2024-11-29 08:05:33.505188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:43.599 [2024-11-29 08:05:33.505195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:43.599 [2024-11-29 08:05:33.505202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:43.599 [2024-11-29 08:05:33.505209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:43.599 [2024-11-29 08:05:33.505217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:43.599 [2024-11-29 08:05:33.505225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:43.599 [2024-11-29 08:05:33.505232] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:43.599 [2024-11-29 08:05:33.505240] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:43.599 [2024-11-29 08:05:33.505249] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:33:43.599 [2024-11-29 08:05:33.505256] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:43.599 [2024-11-29 08:05:33.505263] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:43.599 [2024-11-29 08:05:33.505272] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:43.599 [2024-11-29 08:05:33.505281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:43.599 [2024-11-29 08:05:33.505289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:43.599 [2024-11-29 08:05:33.505297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.626 ms 00:33:43.599 [2024-11-29 08:05:33.505305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:43.599 [2024-11-29 08:05:33.533431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:43.599 [2024-11-29 08:05:33.533488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:43.599 [2024-11-29 08:05:33.533500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.084 ms 00:33:43.599 [2024-11-29 08:05:33.533509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:43.599 [2024-11-29 08:05:33.533596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:43.599 [2024-11-29 08:05:33.533629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:43.599 [2024-11-29 08:05:33.533642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:33:43.599 [2024-11-29 08:05:33.533651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:43.952 [2024-11-29 08:05:33.580761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:43.952 [2024-11-29 08:05:33.580819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:43.952 [2024-11-29 08:05:33.580833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.050 ms 00:33:43.952 [2024-11-29 08:05:33.580841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:43.952 [2024-11-29 08:05:33.580898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:43.952 [2024-11-29 08:05:33.580909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:43.953 [2024-11-29 08:05:33.580919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:43.953 [2024-11-29 08:05:33.580927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:43.953 [2024-11-29 08:05:33.581044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:43.953 [2024-11-29 08:05:33.581057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:43.953 [2024-11-29 08:05:33.581066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:33:43.953 [2024-11-29 08:05:33.581074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:43.953 [2024-11-29 08:05:33.581203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:43.953 [2024-11-29 08:05:33.581216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:43.953 [2024-11-29 08:05:33.581225] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:33:43.953 [2024-11-29 08:05:33.581232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:43.953 [2024-11-29 08:05:33.597735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:43.953 [2024-11-29 08:05:33.597783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:43.953 [2024-11-29 08:05:33.597795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.482 ms 00:33:43.953 [2024-11-29 08:05:33.597804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:43.953 [2024-11-29 08:05:33.597973] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:33:43.953 [2024-11-29 08:05:33.597988] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:43.953 [2024-11-29 08:05:33.598001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:43.953 [2024-11-29 08:05:33.598009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:43.953 [2024-11-29 08:05:33.598019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:33:43.953 [2024-11-29 08:05:33.598026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:43.953 [2024-11-29 08:05:33.610311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:43.953 [2024-11-29 08:05:33.610352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:43.953 [2024-11-29 08:05:33.610364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.266 ms 00:33:43.953 [2024-11-29 08:05:33.610372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:43.953 [2024-11-29 08:05:33.610513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:43.953 [2024-11-29 08:05:33.610524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:33:43.953 [2024-11-29 08:05:33.610534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:33:43.953 [2024-11-29 08:05:33.610547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:43.953 [2024-11-29 08:05:33.610602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:43.953 [2024-11-29 08:05:33.610612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:33:43.953 [2024-11-29 08:05:33.610630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:33:43.953 [2024-11-29 08:05:33.610637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:43.953 [2024-11-29 08:05:33.611217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:43.953 [2024-11-29 08:05:33.611242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:43.953 [2024-11-29 08:05:33.611251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 00:33:43.953 [2024-11-29 08:05:33.611260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:43.953 [2024-11-29 08:05:33.611281] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:33:43.953 [2024-11-29 08:05:33.611293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:43.953 [2024-11-29 08:05:33.611302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:33:43.953 [2024-11-29 08:05:33.611310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:33:43.953 [2024-11-29 08:05:33.611318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:43.953 [2024-11-29 08:05:33.624320] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:43.953 [2024-11-29 08:05:33.624525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:43.953 [2024-11-29 08:05:33.624538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:43.953 [2024-11-29 08:05:33.624550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.188 ms 00:33:43.953 [2024-11-29 08:05:33.624560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:43.953 [2024-11-29 08:05:33.626776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:43.953 [2024-11-29 08:05:33.626812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:43.953 [2024-11-29 08:05:33.626822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.191 ms 00:33:43.953 [2024-11-29 08:05:33.626830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:43.953 [2024-11-29 08:05:33.626931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:43.953 [2024-11-29 08:05:33.626943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:43.953 [2024-11-29 08:05:33.626953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:33:43.953 [2024-11-29 08:05:33.626961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:43.953 [2024-11-29 08:05:33.626987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:43.953 [2024-11-29 08:05:33.627001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:43.953 [2024-11-29 08:05:33.627010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:33:43.953 [2024-11-29 08:05:33.627018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:43.953 [2024-11-29 08:05:33.627050] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:43.953 [2024-11-29 08:05:33.627061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:43.953 [2024-11-29 08:05:33.627069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:43.953 [2024-11-29 08:05:33.627077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:33:43.953 [2024-11-29 08:05:33.627085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:43.953 [2024-11-29 08:05:33.653971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:43.953 [2024-11-29 08:05:33.654025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:43.953 [2024-11-29 08:05:33.654039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.864 ms 00:33:43.953 [2024-11-29 08:05:33.654048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:43.953 [2024-11-29 08:05:33.654143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:43.953 [2024-11-29 08:05:33.654155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:43.953 [2024-11-29 08:05:33.654164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.044 ms 00:33:43.953 [2024-11-29 08:05:33.654173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:43.953 [2024-11-29 08:05:33.655511] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 157.757 ms, result 0 00:33:44.897  [2024-11-29T08:05:35.786Z] Copying: 18/1024 [MB] (18 MBps) [2024-11-29T08:05:36.730Z] Copying: 31/1024 [MB] (13 MBps) [2024-11-29T08:05:37.674Z] Copying: 50/1024 [MB] (19 MBps) [2024-11-29T08:05:39.063Z] Copying: 68/1024 [MB] (18 MBps) [2024-11-29T08:05:40.005Z] Copying: 85/1024 [MB] (16 MBps) [2024-11-29T08:05:40.947Z] Copying: 100/1024 [MB] (15 MBps) [2024-11-29T08:05:41.891Z] Copying: 118/1024 [MB] (18 MBps) [2024-11-29T08:05:42.836Z] Copying: 129/1024 [MB] (10 MBps) [2024-11-29T08:05:43.783Z] Copying: 142588/1048576 [kB] (10236 kBps) [2024-11-29T08:05:44.727Z] Copying: 149/1024 [MB] (10 MBps) [2024-11-29T08:05:45.681Z] Copying: 159/1024 [MB] (10 MBps) [2024-11-29T08:05:47.069Z] Copying: 169/1024 [MB] (10 MBps) [2024-11-29T08:05:48.014Z] Copying: 180/1024 [MB] (10 MBps) [2024-11-29T08:05:48.960Z] Copying: 190/1024 [MB] (10 MBps) [2024-11-29T08:05:49.906Z] Copying: 200/1024 [MB] (10 MBps) [2024-11-29T08:05:50.851Z] Copying: 215424/1048576 [kB] (10164 kBps) [2024-11-29T08:05:51.794Z] Copying: 225632/1048576 [kB] (10208 kBps) [2024-11-29T08:05:52.727Z] Copying: 230/1024 [MB] (10 MBps) [2024-11-29T08:05:54.102Z] Copying: 282/1024 [MB] (52 MBps) [2024-11-29T08:05:55.036Z] Copying: 335/1024 [MB] (52 MBps) [2024-11-29T08:05:55.980Z] Copying: 387/1024 [MB] (52 MBps) [2024-11-29T08:05:56.925Z] Copying: 416/1024 [MB] (28 MBps) [2024-11-29T08:05:57.864Z] Copying: 435/1024 [MB] (19 MBps) [2024-11-29T08:05:58.802Z] Copying: 463/1024 [MB] (28 MBps) [2024-11-29T08:05:59.745Z] Copying: 505/1024 [MB] (41 MBps) [2024-11-29T08:06:00.686Z] Copying: 525/1024 [MB] (20 MBps) [2024-11-29T08:06:02.073Z] Copying: 556/1024 [MB] (30 MBps) [2024-11-29T08:06:03.014Z] Copying: 575/1024 [MB] (18 MBps) [2024-11-29T08:06:03.957Z] Copying: 596/1024 [MB] (20 MBps) [2024-11-29T08:06:04.899Z] Copying: 619/1024 [MB] (23 MBps) [2024-11-29T08:06:05.844Z] Copying: 631/1024 [MB] (12 MBps) [2024-11-29T08:06:06.787Z] Copying: 645/1024 [MB] (13 MBps) [2024-11-29T08:06:07.730Z] Copying: 661/1024 [MB] (15 MBps) [2024-11-29T08:06:08.672Z] Copying: 676/1024 [MB] (14 MBps) [2024-11-29T08:06:10.055Z] Copying: 694/1024 [MB] (18 MBps) [2024-11-29T08:06:10.994Z] Copying: 710/1024 [MB] (15 MBps) [2024-11-29T08:06:11.936Z] Copying: 732/1024 [MB] (21 MBps) [2024-11-29T08:06:12.880Z] Copying: 754/1024 [MB] (22 MBps) [2024-11-29T08:06:13.825Z] Copying: 773/1024 [MB] (18 MBps) [2024-11-29T08:06:14.769Z] Copying: 795/1024 [MB] (22 MBps) [2024-11-29T08:06:15.713Z] Copying: 818/1024 [MB] (22 MBps) [2024-11-29T08:06:17.101Z] Copying: 836/1024 [MB] (18 MBps) [2024-11-29T08:06:17.674Z] Copying: 847/1024 [MB] (10 MBps) [2024-11-29T08:06:19.064Z] Copying: 858/1024 [MB] (11 MBps) [2024-11-29T08:06:20.009Z] Copying: 869/1024 [MB] (11 MBps) [2024-11-29T08:06:20.954Z] Copying: 879/1024 [MB] (10 MBps) [2024-11-29T08:06:21.899Z] Copying: 890/1024 [MB] (10 MBps) [2024-11-29T08:06:22.933Z] Copying: 901/1024 [MB] (11 MBps) [2024-11-29T08:06:23.895Z] Copying: 911/1024 [MB] (10 MBps) [2024-11-29T08:06:24.840Z] Copying: 922/1024 [MB] (11 MBps) [2024-11-29T08:06:25.780Z] Copying: 934/1024 [MB] (11 MBps) [2024-11-29T08:06:26.724Z] Copying: 944/1024 [MB] (10 MBps) [2024-11-29T08:06:27.669Z] Copying: 977184/1048576 [kB] (10224 kBps) 
[2024-11-29T08:06:29.057Z] Copying: 964/1024 [MB] (10 MBps) [2024-11-29T08:06:29.998Z] Copying: 975/1024 [MB] (10 MBps) [2024-11-29T08:06:30.941Z] Copying: 985/1024 [MB] (10 MBps) [2024-11-29T08:06:31.892Z] Copying: 1001/1024 [MB] (15 MBps) [2024-11-29T08:06:32.838Z] Copying: 1012/1024 [MB] (10 MBps) [2024-11-29T08:06:33.783Z] Copying: 1022/1024 [MB] (10 MBps) [2024-11-29T08:06:34.046Z] Copying: 1048484/1048576 [kB] (1264 kBps) [2024-11-29T08:06:34.046Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-11-29 08:06:33.808839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:44.102 [2024-11-29 08:06:33.808914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:44.102 [2024-11-29 08:06:33.808932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:44.102 [2024-11-29 08:06:33.808941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:44.102 [2024-11-29 08:06:33.811943] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:44.102 [2024-11-29 08:06:33.816363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:44.102 [2024-11-29 08:06:33.816405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:34:44.102 [2024-11-29 08:06:33.816417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.360 ms 00:34:44.102 [2024-11-29 08:06:33.816426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:44.102 [2024-11-29 08:06:33.828357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:44.102 [2024-11-29 08:06:33.828418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:44.102 [2024-11-29 08:06:33.828433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.698 ms 00:34:44.102 [2024-11-29 08:06:33.828456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:44.102 [2024-11-29 08:06:33.828488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:44.102 [2024-11-29 08:06:33.828498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:34:44.102 [2024-11-29 08:06:33.828509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:44.102 [2024-11-29 08:06:33.828518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:44.102 [2024-11-29 08:06:33.828584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:44.102 [2024-11-29 08:06:33.828601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:34:44.102 [2024-11-29 08:06:33.828610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:34:44.102 [2024-11-29 08:06:33.828618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:44.102 [2024-11-29 08:06:33.828633] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:44.102 [2024-11-29 08:06:33.828646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 126464 / 261120 wr_cnt: 1 state: open 00:34:44.102 [2024-11-29 08:06:33.828656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:34:44.102 [2024-11-29 08:06:33.828664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:34:44.102 [2024-11-29 08:06:33.828672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:44.102 [2024-11-29 08:06:33.828680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:44.102 [2024-11-29 08:06:33.828688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:44.102 [2024-11-29 08:06:33.828696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:44.102 [2024-11-29 08:06:33.828704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:44.102 [2024-11-29 08:06:33.828712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:44.102 [2024-11-29 08:06:33.828720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:44.102 [2024-11-29 08:06:33.828728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:44.102 [2024-11-29 08:06:33.828736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:44.102 [2024-11-29 08:06:33.828744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:44.102 [2024-11-29 08:06:33.828751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:44.102 [2024-11-29 08:06:33.828760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:44.102 [2024-11-29 08:06:33.828768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:44.102 [2024-11-29 08:06:33.828775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:44.102 [2024-11-29 08:06:33.828782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:44.102 [2024-11-29 08:06:33.828790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:34:44.102 [2024-11-29 08:06:33.828797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:34:44.102 [2024-11-29 08:06:33.828805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:34:44.102 [2024-11-29 08:06:33.828812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:34:44.102 [2024-11-29 08:06:33.828821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:34:44.102 [2024-11-29 08:06:33.828830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:34:44.102 [2024-11-29 08:06:33.828838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:34:44.102 [2024-11-29 08:06:33.828845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:34:44.102 [2024-11-29 08:06:33.828853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:34:44.102 [2024-11-29 08:06:33.828860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:34:44.102 [2024-11-29 08:06:33.828868] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:34:44.102 [2024-11-29 08:06:33.828875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:34:44.102 [2024-11-29 08:06:33.828883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:34:44.102 [2024-11-29 08:06:33.828890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:34:44.102 [2024-11-29 08:06:33.828898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:34:44.102 [2024-11-29 08:06:33.828906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:34:44.102 [2024-11-29 08:06:33.828914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.828921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.828929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.828936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.828943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.828950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.828957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.828965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.828973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.828980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.828987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.829003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.829010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.829019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.829027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.829034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.829041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.829048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.829055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.829062] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.829073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.829081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.829091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.829099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.829108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.829116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.829123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.829131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.829139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.829147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.829155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.829163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.829170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.829178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.829185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.829192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.829199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.829207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.829215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.829222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.829230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.829237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.829247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.829256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 
08:06:33.829263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.829271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.829278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.829285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.829293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.829300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.829307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.829314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.829323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.829332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.829340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.829348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.829355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.829363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.829370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.829378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.829386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.829393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.829401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.829409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.829417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.829426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:34:44.103 [2024-11-29 08:06:33.829458] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:34:44.103 [2024-11-29 08:06:33.829468] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 81b9f93a-4e3e-4f17-ae2b-60de48297168 00:34:44.103 [2024-11-29 08:06:33.829477] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 126464 00:34:44.103 [2024-11-29 08:06:33.829485] ftl_debug.c: 
214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 126496 00:34:44.103 [2024-11-29 08:06:33.829492] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 126464 00:34:44.103 [2024-11-29 08:06:33.829501] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0003 00:34:44.103 [2024-11-29 08:06:33.829513] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:44.103 [2024-11-29 08:06:33.829521] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:34:44.103 [2024-11-29 08:06:33.829528] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:44.103 [2024-11-29 08:06:33.829536] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:44.103 [2024-11-29 08:06:33.829542] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:44.103 [2024-11-29 08:06:33.829550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:44.103 [2024-11-29 08:06:33.829558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:44.103 [2024-11-29 08:06:33.829567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.917 ms 00:34:44.103 [2024-11-29 08:06:33.829575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:44.103 [2024-11-29 08:06:33.843279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:44.103 [2024-11-29 08:06:33.843321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:44.103 [2024-11-29 08:06:33.843339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.686 ms 00:34:44.103 [2024-11-29 08:06:33.843347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:44.103 [2024-11-29 08:06:33.843751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:44.103 [2024-11-29 08:06:33.843771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:44.103 [2024-11-29 08:06:33.843782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.383 ms 00:34:44.103 [2024-11-29 08:06:33.843789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:44.103 [2024-11-29 08:06:33.880020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:44.103 [2024-11-29 08:06:33.880072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:44.103 [2024-11-29 08:06:33.880083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:44.103 [2024-11-29 08:06:33.880093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:44.103 [2024-11-29 08:06:33.880163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:44.104 [2024-11-29 08:06:33.880172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:44.104 [2024-11-29 08:06:33.880182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:44.104 [2024-11-29 08:06:33.880192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:44.104 [2024-11-29 08:06:33.880245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:44.104 [2024-11-29 08:06:33.880257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:44.104 [2024-11-29 08:06:33.880271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:44.104 [2024-11-29 08:06:33.880281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:34:44.104 [2024-11-29 08:06:33.880299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:44.104 [2024-11-29 08:06:33.880309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:44.104 [2024-11-29 08:06:33.880319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:44.104 [2024-11-29 08:06:33.880328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:44.104 [2024-11-29 08:06:33.964047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:44.104 [2024-11-29 08:06:33.964104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:44.104 [2024-11-29 08:06:33.964117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:44.104 [2024-11-29 08:06:33.964126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:44.104 [2024-11-29 08:06:34.033353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:44.104 [2024-11-29 08:06:34.033414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:44.104 [2024-11-29 08:06:34.033427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:44.104 [2024-11-29 08:06:34.033436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:44.104 [2024-11-29 08:06:34.033531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:44.104 [2024-11-29 08:06:34.033542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:44.104 [2024-11-29 08:06:34.033552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:44.104 [2024-11-29 08:06:34.033566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:44.104 [2024-11-29 08:06:34.033603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:44.104 [2024-11-29 08:06:34.033613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:44.104 [2024-11-29 08:06:34.033622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:44.104 [2024-11-29 08:06:34.033630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:44.104 [2024-11-29 08:06:34.033732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:44.104 [2024-11-29 08:06:34.033743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:44.104 [2024-11-29 08:06:34.033752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:44.104 [2024-11-29 08:06:34.033760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:44.104 [2024-11-29 08:06:34.033793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:44.104 [2024-11-29 08:06:34.033803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:44.104 [2024-11-29 08:06:34.033812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:44.104 [2024-11-29 08:06:34.033820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:44.104 [2024-11-29 08:06:34.033863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:44.104 [2024-11-29 08:06:34.033872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:44.104 [2024-11-29 08:06:34.033881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:44.104 [2024-11-29 
08:06:34.033889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:44.104 [2024-11-29 08:06:34.033941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:44.104 [2024-11-29 08:06:34.033952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:44.104 [2024-11-29 08:06:34.033961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:44.104 [2024-11-29 08:06:34.033969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:44.104 [2024-11-29 08:06:34.034104] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 227.532 ms, result 0 00:34:46.020 00:34:46.020 00:34:46.020 08:06:35 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:34:46.020 [2024-11-29 08:06:35.576174] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:34:46.020 [2024-11-29 08:06:35.576325] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86040 ] 00:34:46.020 [2024-11-29 08:06:35.740859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:46.020 [2024-11-29 08:06:35.857103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:46.282 [2024-11-29 08:06:36.151135] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:46.282 [2024-11-29 08:06:36.151220] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:46.545 [2024-11-29 08:06:36.312437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.545 [2024-11-29 08:06:36.312513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:34:46.545 [2024-11-29 08:06:36.312529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:34:46.545 [2024-11-29 08:06:36.312538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.545 [2024-11-29 08:06:36.312594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.545 [2024-11-29 08:06:36.312608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:46.545 [2024-11-29 08:06:36.312618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:34:46.545 [2024-11-29 08:06:36.312626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.545 [2024-11-29 08:06:36.312647] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:34:46.545 [2024-11-29 08:06:36.313390] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:34:46.545 [2024-11-29 08:06:36.313417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.545 [2024-11-29 08:06:36.313425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:46.545 [2024-11-29 08:06:36.313435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.774 ms 00:34:46.545 [2024-11-29 08:06:36.313458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.545 
[2024-11-29 08:06:36.313929] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:34:46.545 [2024-11-29 08:06:36.314001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.545 [2024-11-29 08:06:36.314016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:34:46.545 [2024-11-29 08:06:36.314026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:34:46.545 [2024-11-29 08:06:36.314036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.545 [2024-11-29 08:06:36.314089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.545 [2024-11-29 08:06:36.314104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:34:46.545 [2024-11-29 08:06:36.314112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:34:46.545 [2024-11-29 08:06:36.314120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.545 [2024-11-29 08:06:36.314399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.545 [2024-11-29 08:06:36.314417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:46.545 [2024-11-29 08:06:36.314426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.245 ms 00:34:46.545 [2024-11-29 08:06:36.314434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.545 [2024-11-29 08:06:36.314531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.545 [2024-11-29 08:06:36.314542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:46.545 [2024-11-29 08:06:36.314551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:34:46.545 [2024-11-29 08:06:36.314559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.545 [2024-11-29 08:06:36.314581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.545 [2024-11-29 08:06:36.314591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:34:46.545 [2024-11-29 08:06:36.314602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:34:46.545 [2024-11-29 08:06:36.314610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.545 [2024-11-29 08:06:36.314631] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:34:46.545 [2024-11-29 08:06:36.318878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.545 [2024-11-29 08:06:36.318917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:46.545 [2024-11-29 08:06:36.318928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.253 ms 00:34:46.545 [2024-11-29 08:06:36.318937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.545 [2024-11-29 08:06:36.318972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.545 [2024-11-29 08:06:36.318982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:34:46.545 [2024-11-29 08:06:36.318992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:34:46.545 [2024-11-29 08:06:36.319001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.545 [2024-11-29 08:06:36.319058] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:34:46.545 
[2024-11-29 08:06:36.319085] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:34:46.546 [2024-11-29 08:06:36.319127] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:34:46.546 [2024-11-29 08:06:36.319146] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:34:46.546 [2024-11-29 08:06:36.319253] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:34:46.546 [2024-11-29 08:06:36.319266] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:34:46.546 [2024-11-29 08:06:36.319278] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:34:46.546 [2024-11-29 08:06:36.319289] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:34:46.546 [2024-11-29 08:06:36.319300] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:34:46.546 [2024-11-29 08:06:36.319312] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:34:46.546 [2024-11-29 08:06:36.319321] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:34:46.546 [2024-11-29 08:06:36.319331] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:34:46.546 [2024-11-29 08:06:36.319340] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:34:46.546 [2024-11-29 08:06:36.319348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.546 [2024-11-29 08:06:36.319356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:34:46.546 [2024-11-29 08:06:36.319364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:34:46.546 [2024-11-29 08:06:36.319372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.546 [2024-11-29 08:06:36.319468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.546 [2024-11-29 08:06:36.319478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:34:46.546 [2024-11-29 08:06:36.319486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:34:46.546 [2024-11-29 08:06:36.319496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.546 [2024-11-29 08:06:36.319600] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:34:46.546 [2024-11-29 08:06:36.319611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:34:46.546 [2024-11-29 08:06:36.319619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:46.546 [2024-11-29 08:06:36.319628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:46.546 [2024-11-29 08:06:36.319637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:34:46.546 [2024-11-29 08:06:36.319646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:34:46.546 [2024-11-29 08:06:36.319653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:34:46.546 [2024-11-29 08:06:36.319660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:34:46.546 [2024-11-29 08:06:36.319667] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 80.12 MiB 00:34:46.546 [2024-11-29 08:06:36.319674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:46.546 [2024-11-29 08:06:36.319680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:34:46.546 [2024-11-29 08:06:36.319689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:34:46.546 [2024-11-29 08:06:36.319696] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:46.546 [2024-11-29 08:06:36.319703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:34:46.546 [2024-11-29 08:06:36.319710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:34:46.546 [2024-11-29 08:06:36.319723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:46.546 [2024-11-29 08:06:36.319729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:34:46.546 [2024-11-29 08:06:36.319736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:34:46.546 [2024-11-29 08:06:36.319743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:46.546 [2024-11-29 08:06:36.319750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:34:46.546 [2024-11-29 08:06:36.319757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:34:46.546 [2024-11-29 08:06:36.319763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:46.546 [2024-11-29 08:06:36.319769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:34:46.546 [2024-11-29 08:06:36.319776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:34:46.546 [2024-11-29 08:06:36.319782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:46.546 [2024-11-29 08:06:36.319789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:34:46.546 [2024-11-29 08:06:36.319795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:34:46.546 [2024-11-29 08:06:36.319802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:46.546 [2024-11-29 08:06:36.319809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:34:46.546 [2024-11-29 08:06:36.319816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:34:46.546 [2024-11-29 08:06:36.319823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:46.546 [2024-11-29 08:06:36.319829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:34:46.546 [2024-11-29 08:06:36.319836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:34:46.546 [2024-11-29 08:06:36.319842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:46.546 [2024-11-29 08:06:36.319849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:34:46.546 [2024-11-29 08:06:36.319855] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:34:46.546 [2024-11-29 08:06:36.319861] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:46.546 [2024-11-29 08:06:36.319868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:34:46.546 [2024-11-29 08:06:36.319875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:34:46.546 [2024-11-29 08:06:36.319882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:46.546 [2024-11-29 08:06:36.319888] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:34:46.546 [2024-11-29 08:06:36.319895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:34:46.546 [2024-11-29 08:06:36.319900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:46.546 [2024-11-29 08:06:36.319908] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:34:46.546 [2024-11-29 08:06:36.319916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:34:46.546 [2024-11-29 08:06:36.319924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:46.546 [2024-11-29 08:06:36.319932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:46.546 [2024-11-29 08:06:36.319943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:34:46.546 [2024-11-29 08:06:36.319950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:34:46.546 [2024-11-29 08:06:36.319957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:34:46.546 [2024-11-29 08:06:36.319965] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:34:46.546 [2024-11-29 08:06:36.319971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:34:46.546 [2024-11-29 08:06:36.319979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:34:46.546 [2024-11-29 08:06:36.319987] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:34:46.546 [2024-11-29 08:06:36.319997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:46.546 [2024-11-29 08:06:36.320006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:34:46.546 [2024-11-29 08:06:36.320014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:34:46.546 [2024-11-29 08:06:36.320021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:34:46.546 [2024-11-29 08:06:36.320028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:34:46.546 [2024-11-29 08:06:36.320035] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:34:46.547 [2024-11-29 08:06:36.320042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:34:46.547 [2024-11-29 08:06:36.320051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:34:46.547 [2024-11-29 08:06:36.320058] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:34:46.547 [2024-11-29 08:06:36.320065] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:34:46.547 [2024-11-29 08:06:36.320072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:34:46.547 [2024-11-29 08:06:36.320079] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:34:46.547 [2024-11-29 08:06:36.320086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:34:46.547 [2024-11-29 08:06:36.320093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:34:46.547 [2024-11-29 08:06:36.320100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:34:46.547 [2024-11-29 08:06:36.320107] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:34:46.547 [2024-11-29 08:06:36.320116] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:46.547 [2024-11-29 08:06:36.320123] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:34:46.547 [2024-11-29 08:06:36.320130] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:34:46.547 [2024-11-29 08:06:36.320137] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:34:46.547 [2024-11-29 08:06:36.320144] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:34:46.547 [2024-11-29 08:06:36.320155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.547 [2024-11-29 08:06:36.320163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:34:46.547 [2024-11-29 08:06:36.320170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.624 ms 00:34:46.547 [2024-11-29 08:06:36.320178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.547 [2024-11-29 08:06:36.347769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.547 [2024-11-29 08:06:36.347811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:46.547 [2024-11-29 08:06:36.347823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.548 ms 00:34:46.547 [2024-11-29 08:06:36.347831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.547 [2024-11-29 08:06:36.347919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.547 [2024-11-29 08:06:36.347928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:34:46.547 [2024-11-29 08:06:36.347941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:34:46.547 [2024-11-29 08:06:36.347949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.547 [2024-11-29 08:06:36.396813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.547 [2024-11-29 08:06:36.396864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:46.547 [2024-11-29 08:06:36.396877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.808 ms 00:34:46.547 [2024-11-29 08:06:36.396886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.547 [2024-11-29 08:06:36.396937] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.547 [2024-11-29 08:06:36.396948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:46.547 [2024-11-29 08:06:36.396957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:34:46.547 [2024-11-29 08:06:36.396965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.547 [2024-11-29 08:06:36.397079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.547 [2024-11-29 08:06:36.397091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:46.547 [2024-11-29 08:06:36.397101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:34:46.547 [2024-11-29 08:06:36.397109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.547 [2024-11-29 08:06:36.397236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.547 [2024-11-29 08:06:36.397249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:46.547 [2024-11-29 08:06:36.397258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:34:46.547 [2024-11-29 08:06:36.397266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.547 [2024-11-29 08:06:36.412867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.547 [2024-11-29 08:06:36.412910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:46.547 [2024-11-29 08:06:36.412921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.582 ms 00:34:46.547 [2024-11-29 08:06:36.412929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.547 [2024-11-29 08:06:36.413084] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:34:46.547 [2024-11-29 08:06:36.413098] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:34:46.547 [2024-11-29 08:06:36.413111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.547 [2024-11-29 08:06:36.413120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:34:46.547 [2024-11-29 08:06:36.413129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:34:46.547 [2024-11-29 08:06:36.413136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.547 [2024-11-29 08:06:36.425451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.547 [2024-11-29 08:06:36.425490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:34:46.547 [2024-11-29 08:06:36.425502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.286 ms 00:34:46.547 [2024-11-29 08:06:36.425511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.547 [2024-11-29 08:06:36.425636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.547 [2024-11-29 08:06:36.425646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:34:46.547 [2024-11-29 08:06:36.425654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:34:46.547 [2024-11-29 08:06:36.425666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.547 [2024-11-29 08:06:36.425730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.547 [2024-11-29 
08:06:36.425850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:34:46.547 [2024-11-29 08:06:36.425862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:34:46.547 [2024-11-29 08:06:36.425880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.547 [2024-11-29 08:06:36.426489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.547 [2024-11-29 08:06:36.426520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:34:46.547 [2024-11-29 08:06:36.426530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.559 ms 00:34:46.547 [2024-11-29 08:06:36.426538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.547 [2024-11-29 08:06:36.426561] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:34:46.547 [2024-11-29 08:06:36.426571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.547 [2024-11-29 08:06:36.426580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:34:46.547 [2024-11-29 08:06:36.426588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:34:46.547 [2024-11-29 08:06:36.426596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.547 [2024-11-29 08:06:36.439097] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:34:46.547 [2024-11-29 08:06:36.439252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.547 [2024-11-29 08:06:36.439263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:34:46.547 [2024-11-29 08:06:36.439274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.636 ms 00:34:46.547 [2024-11-29 08:06:36.439281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.548 [2024-11-29 08:06:36.441587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.548 [2024-11-29 08:06:36.441618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:34:46.548 [2024-11-29 08:06:36.441628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.283 ms 00:34:46.548 [2024-11-29 08:06:36.441637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.548 [2024-11-29 08:06:36.441738] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:34:46.548 [2024-11-29 08:06:36.442218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.548 [2024-11-29 08:06:36.442239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:34:46.548 [2024-11-29 08:06:36.442249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.525 ms 00:34:46.548 [2024-11-29 08:06:36.442257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.548 [2024-11-29 08:06:36.442291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.548 [2024-11-29 08:06:36.442300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:34:46.548 [2024-11-29 08:06:36.442308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:34:46.548 [2024-11-29 08:06:36.442315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.548 [2024-11-29 08:06:36.442349] mngt/ftl_mngt_self_test.c: 
208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:34:46.548 [2024-11-29 08:06:36.442359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.548 [2024-11-29 08:06:36.442367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:34:46.548 [2024-11-29 08:06:36.442375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:34:46.548 [2024-11-29 08:06:36.442383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.548 [2024-11-29 08:06:36.468644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.548 [2024-11-29 08:06:36.468691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:34:46.548 [2024-11-29 08:06:36.468705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.244 ms 00:34:46.548 [2024-11-29 08:06:36.468713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.548 [2024-11-29 08:06:36.468802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.548 [2024-11-29 08:06:36.468812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:34:46.548 [2024-11-29 08:06:36.468823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:34:46.548 [2024-11-29 08:06:36.468830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.548 [2024-11-29 08:06:36.470165] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 157.248 ms, result 0 00:34:47.936  [2024-11-29T08:06:38.824Z] Copying: 14/1024 [MB] (14 MBps) [2024-11-29T08:06:39.767Z] Copying: 32/1024 [MB] (18 MBps) [2024-11-29T08:06:40.707Z] Copying: 50/1024 [MB] (18 MBps) [2024-11-29T08:06:42.093Z] Copying: 62/1024 [MB] (11 MBps) [2024-11-29T08:06:43.038Z] Copying: 81/1024 [MB] (19 MBps) [2024-11-29T08:06:43.983Z] Copying: 94/1024 [MB] (12 MBps) [2024-11-29T08:06:44.929Z] Copying: 117/1024 [MB] (23 MBps) [2024-11-29T08:06:45.869Z] Copying: 131/1024 [MB] (13 MBps) [2024-11-29T08:06:46.814Z] Copying: 151/1024 [MB] (20 MBps) [2024-11-29T08:06:47.815Z] Copying: 177/1024 [MB] (25 MBps) [2024-11-29T08:06:48.756Z] Copying: 197/1024 [MB] (20 MBps) [2024-11-29T08:06:49.697Z] Copying: 219/1024 [MB] (22 MBps) [2024-11-29T08:06:51.078Z] Copying: 243/1024 [MB] (23 MBps) [2024-11-29T08:06:52.017Z] Copying: 262/1024 [MB] (18 MBps) [2024-11-29T08:06:52.959Z] Copying: 278/1024 [MB] (15 MBps) [2024-11-29T08:06:53.905Z] Copying: 296/1024 [MB] (18 MBps) [2024-11-29T08:06:54.850Z] Copying: 309/1024 [MB] (12 MBps) [2024-11-29T08:06:55.793Z] Copying: 324/1024 [MB] (15 MBps) [2024-11-29T08:06:56.737Z] Copying: 342/1024 [MB] (17 MBps) [2024-11-29T08:06:57.681Z] Copying: 355/1024 [MB] (13 MBps) [2024-11-29T08:06:59.066Z] Copying: 367/1024 [MB] (11 MBps) [2024-11-29T08:07:00.009Z] Copying: 378/1024 [MB] (11 MBps) [2024-11-29T08:07:00.954Z] Copying: 389/1024 [MB] (11 MBps) [2024-11-29T08:07:01.898Z] Copying: 401/1024 [MB] (11 MBps) [2024-11-29T08:07:02.844Z] Copying: 425/1024 [MB] (24 MBps) [2024-11-29T08:07:03.789Z] Copying: 448/1024 [MB] (22 MBps) [2024-11-29T08:07:04.734Z] Copying: 461/1024 [MB] (13 MBps) [2024-11-29T08:07:05.680Z] Copying: 477/1024 [MB] (16 MBps) [2024-11-29T08:07:07.068Z] Copying: 496/1024 [MB] (18 MBps) [2024-11-29T08:07:08.014Z] Copying: 517/1024 [MB] (21 MBps) [2024-11-29T08:07:08.959Z] Copying: 537/1024 [MB] (19 MBps) [2024-11-29T08:07:09.903Z] Copying: 553/1024 [MB] (16 MBps) [2024-11-29T08:07:10.849Z] 
Copying: 566/1024 [MB] (13 MBps) [2024-11-29T08:07:11.792Z] Copying: 581/1024 [MB] (14 MBps) [2024-11-29T08:07:12.738Z] Copying: 591/1024 [MB] (10 MBps) [2024-11-29T08:07:13.684Z] Copying: 602/1024 [MB] (10 MBps) [2024-11-29T08:07:15.072Z] Copying: 613/1024 [MB] (10 MBps) [2024-11-29T08:07:16.016Z] Copying: 624/1024 [MB] (10 MBps) [2024-11-29T08:07:16.961Z] Copying: 641/1024 [MB] (17 MBps) [2024-11-29T08:07:17.908Z] Copying: 651/1024 [MB] (10 MBps) [2024-11-29T08:07:18.854Z] Copying: 662/1024 [MB] (10 MBps) [2024-11-29T08:07:19.799Z] Copying: 673/1024 [MB] (10 MBps) [2024-11-29T08:07:20.787Z] Copying: 685/1024 [MB] (11 MBps) [2024-11-29T08:07:21.783Z] Copying: 703/1024 [MB] (18 MBps) [2024-11-29T08:07:22.730Z] Copying: 714/1024 [MB] (10 MBps) [2024-11-29T08:07:23.677Z] Copying: 724/1024 [MB] (10 MBps) [2024-11-29T08:07:25.068Z] Copying: 738/1024 [MB] (13 MBps) [2024-11-29T08:07:26.013Z] Copying: 756/1024 [MB] (18 MBps) [2024-11-29T08:07:26.960Z] Copying: 774/1024 [MB] (17 MBps) [2024-11-29T08:07:27.906Z] Copying: 786/1024 [MB] (12 MBps) [2024-11-29T08:07:28.851Z] Copying: 800/1024 [MB] (14 MBps) [2024-11-29T08:07:29.796Z] Copying: 815/1024 [MB] (14 MBps) [2024-11-29T08:07:30.742Z] Copying: 833/1024 [MB] (17 MBps) [2024-11-29T08:07:31.683Z] Copying: 853/1024 [MB] (20 MBps) [2024-11-29T08:07:33.069Z] Copying: 871/1024 [MB] (17 MBps) [2024-11-29T08:07:34.014Z] Copying: 889/1024 [MB] (17 MBps) [2024-11-29T08:07:34.957Z] Copying: 904/1024 [MB] (15 MBps) [2024-11-29T08:07:35.897Z] Copying: 917/1024 [MB] (12 MBps) [2024-11-29T08:07:36.839Z] Copying: 933/1024 [MB] (16 MBps) [2024-11-29T08:07:37.782Z] Copying: 944/1024 [MB] (11 MBps) [2024-11-29T08:07:38.729Z] Copying: 959/1024 [MB] (14 MBps) [2024-11-29T08:07:39.673Z] Copying: 976/1024 [MB] (17 MBps) [2024-11-29T08:07:40.610Z] Copying: 997/1024 [MB] (20 MBps) [2024-11-29T08:07:40.610Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-29 08:07:40.457300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:50.666 [2024-11-29 08:07:40.457390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:35:50.666 [2024-11-29 08:07:40.457410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:35:50.666 [2024-11-29 08:07:40.457423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:50.666 [2024-11-29 08:07:40.457470] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:35:50.666 [2024-11-29 08:07:40.461418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:50.666 [2024-11-29 08:07:40.461468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:35:50.666 [2024-11-29 08:07:40.461485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.926 ms 00:35:50.666 [2024-11-29 08:07:40.461504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:50.666 [2024-11-29 08:07:40.461826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:50.666 [2024-11-29 08:07:40.461849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:35:50.666 [2024-11-29 08:07:40.461862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:35:50.666 [2024-11-29 08:07:40.461874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:50.666 [2024-11-29 08:07:40.461912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:50.666 [2024-11-29 08:07:40.461926] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:35:50.666 [2024-11-29 08:07:40.461938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:35:50.666 [2024-11-29 08:07:40.461949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:50.666 [2024-11-29 08:07:40.462013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:50.666 [2024-11-29 08:07:40.462035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:35:50.666 [2024-11-29 08:07:40.462048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:35:50.666 [2024-11-29 08:07:40.462059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:50.666 [2024-11-29 08:07:40.462077] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:35:50.666 [2024-11-29 08:07:40.462095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:35:50.666 [2024-11-29 08:07:40.462109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:35:50.666 [2024-11-29 08:07:40.462121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:35:50.666 [2024-11-29 08:07:40.462132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:35:50.666 [2024-11-29 08:07:40.462143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:35:50.666 [2024-11-29 08:07:40.462154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:35:50.666 [2024-11-29 08:07:40.462167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:35:50.666 [2024-11-29 08:07:40.462178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:35:50.666 [2024-11-29 08:07:40.462189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:35:50.666 [2024-11-29 08:07:40.462200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:35:50.666 [2024-11-29 08:07:40.462212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:35:50.666 [2024-11-29 08:07:40.462224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:35:50.666 [2024-11-29 08:07:40.462236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:35:50.666 [2024-11-29 08:07:40.462248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:35:50.666 [2024-11-29 08:07:40.462259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:35:50.666 [2024-11-29 08:07:40.462270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:35:50.666 [2024-11-29 08:07:40.462280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:35:50.666 [2024-11-29 08:07:40.462291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:35:50.666 [2024-11-29 08:07:40.462303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 
261120 wr_cnt: 0 state: free 00:35:50.666 [2024-11-29 08:07:40.462317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:35:50.666 [2024-11-29 08:07:40.462328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:35:50.666 [2024-11-29 08:07:40.462339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:35:50.666 [2024-11-29 08:07:40.462350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:35:50.666 [2024-11-29 08:07:40.462362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:35:50.666 [2024-11-29 08:07:40.462373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:35:50.666 [2024-11-29 08:07:40.462384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:35:50.666 [2024-11-29 08:07:40.462395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:35:50.666 [2024-11-29 08:07:40.462408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:35:50.666 [2024-11-29 08:07:40.462420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:35:50.666 [2024-11-29 08:07:40.462431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:35:50.666 [2024-11-29 08:07:40.462455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:35:50.666 [2024-11-29 08:07:40.462467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:35:50.666 [2024-11-29 08:07:40.462478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:35:50.666 [2024-11-29 08:07:40.462489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:35:50.666 [2024-11-29 08:07:40.462500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:35:50.666 [2024-11-29 08:07:40.462511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:35:50.666 [2024-11-29 08:07:40.462522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:35:50.666 [2024-11-29 08:07:40.462533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:35:50.666 [2024-11-29 08:07:40.462545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:35:50.666 [2024-11-29 08:07:40.462556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:35:50.666 [2024-11-29 08:07:40.462568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:35:50.666 [2024-11-29 08:07:40.462578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:35:50.666 [2024-11-29 08:07:40.462589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:35:50.666 [2024-11-29 08:07:40.462600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:35:50.666 [2024-11-29 08:07:40.462611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:35:50.666 [2024-11-29 08:07:40.462633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:35:50.666 [2024-11-29 08:07:40.462644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:35:50.666 [2024-11-29 08:07:40.462656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:35:50.666 [2024-11-29 08:07:40.462667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:35:50.666 [2024-11-29 08:07:40.462678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:35:50.666 [2024-11-29 08:07:40.462690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:35:50.666 [2024-11-29 08:07:40.462701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:35:50.666 [2024-11-29 08:07:40.462712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:35:50.667 [2024-11-29 08:07:40.462723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:35:50.667 [2024-11-29 08:07:40.462735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:35:50.667 [2024-11-29 08:07:40.462747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:35:50.667 [2024-11-29 08:07:40.462758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:35:50.667 [2024-11-29 08:07:40.462769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:35:50.667 [2024-11-29 08:07:40.462780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:35:50.667 [2024-11-29 08:07:40.462792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:35:50.667 [2024-11-29 08:07:40.462804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:35:50.667 [2024-11-29 08:07:40.462816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:35:50.667 [2024-11-29 08:07:40.462828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:35:50.667 [2024-11-29 08:07:40.462839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:35:50.667 [2024-11-29 08:07:40.462850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:35:50.667 [2024-11-29 08:07:40.462862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:35:50.667 [2024-11-29 08:07:40.462873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:35:50.667 [2024-11-29 08:07:40.462885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:35:50.667 [2024-11-29 08:07:40.462897] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:35:50.667 [2024-11-29 08:07:40.462908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:35:50.667 [2024-11-29 08:07:40.462920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:35:50.667 [2024-11-29 08:07:40.462930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:35:50.667 [2024-11-29 08:07:40.462941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:35:50.667 [2024-11-29 08:07:40.462952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:35:50.667 [2024-11-29 08:07:40.462963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:35:50.667 [2024-11-29 08:07:40.462975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:35:50.667 [2024-11-29 08:07:40.462986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:35:50.667 [2024-11-29 08:07:40.462997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:35:50.667 [2024-11-29 08:07:40.463008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:35:50.667 [2024-11-29 08:07:40.463019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:35:50.667 [2024-11-29 08:07:40.463030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:35:50.667 [2024-11-29 08:07:40.463042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:35:50.667 [2024-11-29 08:07:40.463053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:35:50.667 [2024-11-29 08:07:40.463064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:35:50.667 [2024-11-29 08:07:40.463075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:35:50.667 [2024-11-29 08:07:40.463085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:35:50.667 [2024-11-29 08:07:40.463097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:35:50.667 [2024-11-29 08:07:40.463107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:35:50.667 [2024-11-29 08:07:40.463119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:35:50.667 [2024-11-29 08:07:40.463130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:35:50.667 [2024-11-29 08:07:40.463141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:35:50.667 [2024-11-29 08:07:40.463154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:35:50.667 [2024-11-29 08:07:40.463165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:35:50.667 [2024-11-29 
08:07:40.463176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:35:50.667 [2024-11-29 08:07:40.463187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:35:50.667 [2024-11-29 08:07:40.463199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:35:50.667 [2024-11-29 08:07:40.463210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:35:50.667 [2024-11-29 08:07:40.463221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:35:50.667 [2024-11-29 08:07:40.463232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:35:50.667 [2024-11-29 08:07:40.463243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:35:50.667 [2024-11-29 08:07:40.463266] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:35:50.667 [2024-11-29 08:07:40.463278] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 81b9f93a-4e3e-4f17-ae2b-60de48297168 00:35:50.667 [2024-11-29 08:07:40.463290] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:35:50.667 [2024-11-29 08:07:40.463300] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 4640 00:35:50.667 [2024-11-29 08:07:40.463311] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 4608 00:35:50.667 [2024-11-29 08:07:40.463325] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0069 00:35:50.667 [2024-11-29 08:07:40.463336] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:35:50.667 [2024-11-29 08:07:40.463348] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:35:50.667 [2024-11-29 08:07:40.463359] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:35:50.667 [2024-11-29 08:07:40.463368] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:35:50.667 [2024-11-29 08:07:40.463378] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:35:50.667 [2024-11-29 08:07:40.463388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:50.667 [2024-11-29 08:07:40.463400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:35:50.667 [2024-11-29 08:07:40.463411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.311 ms 00:35:50.667 [2024-11-29 08:07:40.463422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:50.667 [2024-11-29 08:07:40.477627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:50.667 [2024-11-29 08:07:40.477661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:35:50.667 [2024-11-29 08:07:40.477678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.186 ms 00:35:50.667 [2024-11-29 08:07:40.477687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:50.667 [2024-11-29 08:07:40.478055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:50.667 [2024-11-29 08:07:40.478077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:35:50.667 [2024-11-29 08:07:40.478085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.350 ms 00:35:50.667 [2024-11-29 08:07:40.478093] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:50.667 [2024-11-29 08:07:40.513064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:50.667 [2024-11-29 08:07:40.513095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:50.667 [2024-11-29 08:07:40.513106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:50.667 [2024-11-29 08:07:40.513114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:50.667 [2024-11-29 08:07:40.513175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:50.667 [2024-11-29 08:07:40.513185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:50.667 [2024-11-29 08:07:40.513194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:50.667 [2024-11-29 08:07:40.513203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:50.667 [2024-11-29 08:07:40.513250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:50.667 [2024-11-29 08:07:40.513265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:50.667 [2024-11-29 08:07:40.513274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:50.667 [2024-11-29 08:07:40.513283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:50.667 [2024-11-29 08:07:40.513300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:50.667 [2024-11-29 08:07:40.513309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:50.667 [2024-11-29 08:07:40.513317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:50.667 [2024-11-29 08:07:40.513326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:50.667 [2024-11-29 08:07:40.595459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:50.667 [2024-11-29 08:07:40.595499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:50.667 [2024-11-29 08:07:40.595511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:50.667 [2024-11-29 08:07:40.595519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:50.926 [2024-11-29 08:07:40.662343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:50.926 [2024-11-29 08:07:40.662383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:50.926 [2024-11-29 08:07:40.662395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:50.926 [2024-11-29 08:07:40.662403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:50.926 [2024-11-29 08:07:40.662491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:50.926 [2024-11-29 08:07:40.662503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:50.926 [2024-11-29 08:07:40.662516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:50.926 [2024-11-29 08:07:40.662525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:50.926 [2024-11-29 08:07:40.662560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:50.926 [2024-11-29 08:07:40.662569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:50.926 [2024-11-29 08:07:40.662577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:35:50.926 [2024-11-29 08:07:40.662585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:50.926 [2024-11-29 08:07:40.662658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:50.926 [2024-11-29 08:07:40.662667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:50.926 [2024-11-29 08:07:40.662675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:50.926 [2024-11-29 08:07:40.662685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:50.926 [2024-11-29 08:07:40.662709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:50.926 [2024-11-29 08:07:40.662719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:35:50.926 [2024-11-29 08:07:40.662727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:50.926 [2024-11-29 08:07:40.662735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:50.926 [2024-11-29 08:07:40.662772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:50.926 [2024-11-29 08:07:40.662782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:50.926 [2024-11-29 08:07:40.662790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:50.926 [2024-11-29 08:07:40.662800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:50.926 [2024-11-29 08:07:40.662841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:50.926 [2024-11-29 08:07:40.662852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:50.926 [2024-11-29 08:07:40.662861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:50.926 [2024-11-29 08:07:40.662868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:50.926 [2024-11-29 08:07:40.662996] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 205.667 ms, result 0 00:35:51.494 00:35:51.494 00:35:51.494 08:07:41 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:35:54.027 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:35:54.027 08:07:43 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:35:54.027 08:07:43 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:35:54.027 08:07:43 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:35:54.027 08:07:43 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:35:54.027 08:07:43 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:35:54.027 Process with pid 83962 is not found 00:35:54.027 Remove shared memory files 00:35:54.027 08:07:43 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 83962 00:35:54.027 08:07:43 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 83962 ']' 00:35:54.027 08:07:43 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 83962 00:35:54.027 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (83962) - No such process 00:35:54.027 08:07:43 ftl.ftl_restore_fast -- common/autotest_common.sh@981 -- # echo 'Process with pid 83962 is not found' 00:35:54.027 08:07:43 ftl.ftl_restore_fast 
-- ftl/restore.sh@33 -- # remove_shm 00:35:54.027 08:07:43 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:35:54.027 08:07:43 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:35:54.027 08:07:43 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_81b9f93a-4e3e-4f17-ae2b-60de48297168_band_md /dev/hugepages/ftl_81b9f93a-4e3e-4f17-ae2b-60de48297168_l2p_l1 /dev/hugepages/ftl_81b9f93a-4e3e-4f17-ae2b-60de48297168_l2p_l2 /dev/hugepages/ftl_81b9f93a-4e3e-4f17-ae2b-60de48297168_l2p_l2_ctx /dev/hugepages/ftl_81b9f93a-4e3e-4f17-ae2b-60de48297168_nvc_md /dev/hugepages/ftl_81b9f93a-4e3e-4f17-ae2b-60de48297168_p2l_pool /dev/hugepages/ftl_81b9f93a-4e3e-4f17-ae2b-60de48297168_sb /dev/hugepages/ftl_81b9f93a-4e3e-4f17-ae2b-60de48297168_sb_shm /dev/hugepages/ftl_81b9f93a-4e3e-4f17-ae2b-60de48297168_trim_bitmap /dev/hugepages/ftl_81b9f93a-4e3e-4f17-ae2b-60de48297168_trim_log /dev/hugepages/ftl_81b9f93a-4e3e-4f17-ae2b-60de48297168_trim_md /dev/hugepages/ftl_81b9f93a-4e3e-4f17-ae2b-60de48297168_vmap 00:35:54.027 08:07:43 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:35:54.027 08:07:43 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:35:54.027 08:07:43 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:35:54.027 00:35:54.027 real 4m33.557s 00:35:54.027 user 4m21.656s 00:35:54.027 sys 0m11.708s 00:35:54.027 08:07:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:54.027 08:07:43 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:35:54.027 ************************************ 00:35:54.027 END TEST ftl_restore_fast 00:35:54.027 ************************************ 00:35:54.027 08:07:43 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:35:54.027 08:07:43 ftl -- ftl/ftl.sh@14 -- # killprocess 74882 00:35:54.027 08:07:43 ftl -- common/autotest_common.sh@954 -- # '[' -z 74882 ']' 00:35:54.027 08:07:43 ftl -- common/autotest_common.sh@958 -- # kill -0 74882 00:35:54.027 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (74882) - No such process 00:35:54.027 Process with pid 74882 is not found 00:35:54.027 08:07:43 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 74882 is not found' 00:35:54.027 08:07:43 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:35:54.027 08:07:43 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=86748 00:35:54.027 08:07:43 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:35:54.027 08:07:43 ftl -- ftl/ftl.sh@20 -- # waitforlisten 86748 00:35:54.027 08:07:43 ftl -- common/autotest_common.sh@835 -- # '[' -z 86748 ']' 00:35:54.027 08:07:43 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:54.027 08:07:43 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:54.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:54.027 08:07:43 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:54.027 08:07:43 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:54.027 08:07:43 ftl -- common/autotest_common.sh@10 -- # set +x 00:35:54.027 [2024-11-29 08:07:43.767000] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:35:54.027 [2024-11-29 08:07:43.767113] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86748 ] 00:35:54.027 [2024-11-29 08:07:43.923052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:54.285 [2024-11-29 08:07:44.029915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:54.851 08:07:44 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:54.851 08:07:44 ftl -- common/autotest_common.sh@868 -- # return 0 00:35:54.851 08:07:44 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:35:55.109 nvme0n1 00:35:55.109 08:07:44 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:35:55.109 08:07:44 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:35:55.109 08:07:44 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:35:55.366 08:07:45 ftl -- ftl/common.sh@28 -- # stores=cafb0aff-fe81-49e5-bd34-8bd1482b2331 00:35:55.366 08:07:45 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:35:55.366 08:07:45 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cafb0aff-fe81-49e5-bd34-8bd1482b2331 00:35:55.625 08:07:45 ftl -- ftl/ftl.sh@23 -- # killprocess 86748 00:35:55.625 08:07:45 ftl -- common/autotest_common.sh@954 -- # '[' -z 86748 ']' 00:35:55.625 08:07:45 ftl -- common/autotest_common.sh@958 -- # kill -0 86748 00:35:55.625 08:07:45 ftl -- common/autotest_common.sh@959 -- # uname 00:35:55.625 08:07:45 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:55.625 08:07:45 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86748 00:35:55.625 08:07:45 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:55.625 08:07:45 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:55.625 killing process with pid 86748 00:35:55.625 08:07:45 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86748' 00:35:55.625 08:07:45 ftl -- common/autotest_common.sh@973 -- # kill 86748 00:35:55.625 08:07:45 ftl -- common/autotest_common.sh@978 -- # wait 86748 00:35:57.001 08:07:46 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:35:57.001 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:57.264 Waiting for block devices as requested 00:35:57.264 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:35:57.264 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:35:57.264 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:35:57.523 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:36:02.813 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:36:02.813 08:07:52 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:36:02.813 Remove shared memory files 00:36:02.813 08:07:52 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:36:02.813 08:07:52 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:36:02.813 08:07:52 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:36:02.813 08:07:52 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:36:02.813 08:07:52 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:36:02.813 08:07:52 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:36:02.813 
00:36:02.813 ************************************
00:36:02.813 END TEST ftl
00:36:02.813 ************************************
00:36:02.813
00:36:02.813 real 18m10.407s
00:36:02.813 user 20m18.418s
00:36:02.813 sys 1m33.668s
00:36:02.813 08:07:52 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:36:02.813 08:07:52 ftl -- common/autotest_common.sh@10 -- # set +x
00:36:02.813 08:07:52 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:36:02.813 08:07:52 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:36:02.813 08:07:52 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:36:02.813 08:07:52 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:36:02.813 08:07:52 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:36:02.813 08:07:52 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:36:02.813 08:07:52 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:36:02.813 08:07:52 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:36:02.813 08:07:52 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:36:02.813 08:07:52 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:36:02.813 08:07:52 -- common/autotest_common.sh@726 -- # xtrace_disable
00:36:02.813 08:07:52 -- common/autotest_common.sh@10 -- # set +x
00:36:02.813 08:07:52 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:36:02.813 08:07:52 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:36:02.813 08:07:52 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:36:02.813 08:07:52 -- common/autotest_common.sh@10 -- # set +x
00:36:03.769 INFO: APP EXITING
00:36:03.769 INFO: killing all VMs
00:36:03.769 INFO: killing vhost app
00:36:03.769 INFO: EXIT DONE
00:36:04.089 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:36:04.681 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:36:04.681 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:36:04.681 0000:00:12.0 (1b36 0010): Already using the nvme driver
00:36:04.681 0000:00:13.0 (1b36 0010): Already using the nvme driver
00:36:04.942 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:36:05.514 Cleaning
00:36:05.514 Removing: /var/run/dpdk/spdk0/config
00:36:05.514 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:36:05.514 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:36:05.514 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:36:05.514 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:36:05.514 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:36:05.514 Removing: /var/run/dpdk/spdk0/hugepage_info
00:36:05.514 Removing: /var/run/dpdk/spdk0
00:36:05.514 Removing: /var/run/dpdk/spdk_pid56869
00:36:05.514 Removing: /var/run/dpdk/spdk_pid57060
00:36:05.514 Removing: /var/run/dpdk/spdk_pid57267
00:36:05.514 Removing: /var/run/dpdk/spdk_pid57366
00:36:05.514 Removing: /var/run/dpdk/spdk_pid57400
00:36:05.514 Removing: /var/run/dpdk/spdk_pid57522
00:36:05.514 Removing: /var/run/dpdk/spdk_pid57539
00:36:05.514 Removing: /var/run/dpdk/spdk_pid57728
00:36:05.514 Removing: /var/run/dpdk/spdk_pid57815
00:36:05.514 Removing: /var/run/dpdk/spdk_pid57911
00:36:05.514 Removing: /var/run/dpdk/spdk_pid58016
00:36:05.514 Removing: /var/run/dpdk/spdk_pid58108
00:36:05.514 Removing: /var/run/dpdk/spdk_pid58147
00:36:05.514 Removing: /var/run/dpdk/spdk_pid58184
00:36:05.514 Removing: /var/run/dpdk/spdk_pid58254
00:36:05.514 Removing: /var/run/dpdk/spdk_pid58344
00:36:05.514 Removing: /var/run/dpdk/spdk_pid58769
00:36:05.514 Removing: /var/run/dpdk/spdk_pid58822
00:36:05.514 Removing: /var/run/dpdk/spdk_pid58885
00:36:05.514 Removing: /var/run/dpdk/spdk_pid58895
00:36:05.514 Removing: /var/run/dpdk/spdk_pid58992
00:36:05.514 Removing: /var/run/dpdk/spdk_pid59008
00:36:05.514 Removing: /var/run/dpdk/spdk_pid59099
00:36:05.514 Removing: /var/run/dpdk/spdk_pid59115
00:36:05.514 Removing: /var/run/dpdk/spdk_pid59168
00:36:05.514 Removing: /var/run/dpdk/spdk_pid59186
00:36:05.514 Removing: /var/run/dpdk/spdk_pid59238
00:36:05.514 Removing: /var/run/dpdk/spdk_pid59252
00:36:05.514 Removing: /var/run/dpdk/spdk_pid59406
00:36:05.514 Removing: /var/run/dpdk/spdk_pid59437
00:36:05.514 Removing: /var/run/dpdk/spdk_pid59526
00:36:05.514 Removing: /var/run/dpdk/spdk_pid59698
00:36:05.514 Removing: /var/run/dpdk/spdk_pid59777
00:36:05.514 Removing: /var/run/dpdk/spdk_pid59813
00:36:05.514 Removing: /var/run/dpdk/spdk_pid60234
00:36:05.514 Removing: /var/run/dpdk/spdk_pid60332
00:36:05.514 Removing: /var/run/dpdk/spdk_pid60441
00:36:05.514 Removing: /var/run/dpdk/spdk_pid60494
00:36:05.514 Removing: /var/run/dpdk/spdk_pid60520
00:36:05.514 Removing: /var/run/dpdk/spdk_pid60598
00:36:05.514 Removing: /var/run/dpdk/spdk_pid61213
00:36:05.514 Removing: /var/run/dpdk/spdk_pid61254
00:36:05.514 Removing: /var/run/dpdk/spdk_pid61712
00:36:05.514 Removing: /var/run/dpdk/spdk_pid61811
00:36:05.514 Removing: /var/run/dpdk/spdk_pid61920
00:36:05.514 Removing: /var/run/dpdk/spdk_pid61973
00:36:05.514 Removing: /var/run/dpdk/spdk_pid61993
00:36:05.514 Removing: /var/run/dpdk/spdk_pid62024
00:36:05.514 Removing: /var/run/dpdk/spdk_pid63861
00:36:05.514 Removing: /var/run/dpdk/spdk_pid63998
00:36:05.514 Removing: /var/run/dpdk/spdk_pid64002
00:36:05.514 Removing: /var/run/dpdk/spdk_pid64014
00:36:05.514 Removing: /var/run/dpdk/spdk_pid64060
00:36:05.514 Removing: /var/run/dpdk/spdk_pid64064
00:36:05.514 Removing: /var/run/dpdk/spdk_pid64076
00:36:05.514 Removing: /var/run/dpdk/spdk_pid64121
00:36:05.514 Removing: /var/run/dpdk/spdk_pid64125
00:36:05.515 Removing: /var/run/dpdk/spdk_pid64137
00:36:05.515 Removing: /var/run/dpdk/spdk_pid64183
00:36:05.515 Removing: /var/run/dpdk/spdk_pid64187
00:36:05.515 Removing: /var/run/dpdk/spdk_pid64199
00:36:05.515 Removing: /var/run/dpdk/spdk_pid65595
00:36:05.515 Removing: /var/run/dpdk/spdk_pid65692
00:36:05.515 Removing: /var/run/dpdk/spdk_pid67093
00:36:05.515 Removing: /var/run/dpdk/spdk_pid68839
00:36:05.515 Removing: /var/run/dpdk/spdk_pid68913
00:36:05.515 Removing: /var/run/dpdk/spdk_pid68988
00:36:05.515 Removing: /var/run/dpdk/spdk_pid69098
00:36:05.515 Removing: /var/run/dpdk/spdk_pid69190
00:36:05.515 Removing: /var/run/dpdk/spdk_pid69284
00:36:05.515 Removing: /var/run/dpdk/spdk_pid69354
00:36:05.776 Removing: /var/run/dpdk/spdk_pid69435
00:36:05.776 Removing: /var/run/dpdk/spdk_pid69540
00:36:05.776 Removing: /var/run/dpdk/spdk_pid69636
00:36:05.776 Removing: /var/run/dpdk/spdk_pid69733
00:36:05.776 Removing: /var/run/dpdk/spdk_pid69802
00:36:05.776 Removing: /var/run/dpdk/spdk_pid69883
00:36:05.776 Removing: /var/run/dpdk/spdk_pid69987
00:36:05.776 Removing: /var/run/dpdk/spdk_pid70079
00:36:05.776 Removing: /var/run/dpdk/spdk_pid70180
00:36:05.776 Removing: /var/run/dpdk/spdk_pid70250
00:36:05.776 Removing: /var/run/dpdk/spdk_pid70325
00:36:05.776 Removing: /var/run/dpdk/spdk_pid70435
00:36:05.776 Removing: /var/run/dpdk/spdk_pid70521
00:36:05.776 Removing: /var/run/dpdk/spdk_pid70622
00:36:05.776 Removing: /var/run/dpdk/spdk_pid70702
00:36:05.776 Removing: /var/run/dpdk/spdk_pid70771
00:36:05.776 Removing: /var/run/dpdk/spdk_pid70845
00:36:05.776 Removing: /var/run/dpdk/spdk_pid70920
00:36:05.776 Removing: /var/run/dpdk/spdk_pid71029
00:36:05.776 Removing: /var/run/dpdk/spdk_pid71133
00:36:05.776 Removing: /var/run/dpdk/spdk_pid71229
00:36:05.776 Removing: /var/run/dpdk/spdk_pid71303
00:36:05.776 Removing: /var/run/dpdk/spdk_pid71376
00:36:05.776 Removing: /var/run/dpdk/spdk_pid71446
00:36:05.776 Removing: /var/run/dpdk/spdk_pid71520
00:36:05.776 Removing: /var/run/dpdk/spdk_pid71629
00:36:05.776 Removing: /var/run/dpdk/spdk_pid71714
00:36:05.776 Removing: /var/run/dpdk/spdk_pid71859
00:36:05.776 Removing: /var/run/dpdk/spdk_pid72143
00:36:05.776 Removing: /var/run/dpdk/spdk_pid72184
00:36:05.776 Removing: /var/run/dpdk/spdk_pid72639
00:36:05.776 Removing: /var/run/dpdk/spdk_pid72823
00:36:05.776 Removing: /var/run/dpdk/spdk_pid72920
00:36:05.776 Removing: /var/run/dpdk/spdk_pid73031
00:36:05.776 Removing: /var/run/dpdk/spdk_pid73081
00:36:05.776 Removing: /var/run/dpdk/spdk_pid73112
00:36:05.776 Removing: /var/run/dpdk/spdk_pid73397
00:36:05.776 Removing: /var/run/dpdk/spdk_pid73459
00:36:05.776 Removing: /var/run/dpdk/spdk_pid73538
00:36:05.776 Removing: /var/run/dpdk/spdk_pid73934
00:36:05.776 Removing: /var/run/dpdk/spdk_pid74077
00:36:05.776 Removing: /var/run/dpdk/spdk_pid74882
00:36:05.776 Removing: /var/run/dpdk/spdk_pid75014
00:36:05.776 Removing: /var/run/dpdk/spdk_pid75178
00:36:05.776 Removing: /var/run/dpdk/spdk_pid75275
00:36:05.776 Removing: /var/run/dpdk/spdk_pid75655
00:36:05.776 Removing: /var/run/dpdk/spdk_pid75935
00:36:05.776 Removing: /var/run/dpdk/spdk_pid76293
00:36:05.776 Removing: /var/run/dpdk/spdk_pid76487
00:36:05.776 Removing: /var/run/dpdk/spdk_pid76683
00:36:05.776 Removing: /var/run/dpdk/spdk_pid76730
00:36:05.776 Removing: /var/run/dpdk/spdk_pid76890
00:36:05.776 Removing: /var/run/dpdk/spdk_pid76925
00:36:05.776 Removing: /var/run/dpdk/spdk_pid76979
00:36:05.776 Removing: /var/run/dpdk/spdk_pid77289
00:36:05.776 Removing: /var/run/dpdk/spdk_pid77521
00:36:05.776 Removing: /var/run/dpdk/spdk_pid78109
00:36:05.776 Removing: /var/run/dpdk/spdk_pid78784
00:36:05.776 Removing: /var/run/dpdk/spdk_pid79413
00:36:05.776 Removing: /var/run/dpdk/spdk_pid80244
00:36:05.776 Removing: /var/run/dpdk/spdk_pid80386
00:36:05.776 Removing: /var/run/dpdk/spdk_pid80473
00:36:05.776 Removing: /var/run/dpdk/spdk_pid80872
00:36:05.776 Removing: /var/run/dpdk/spdk_pid80926
00:36:05.776 Removing: /var/run/dpdk/spdk_pid81755
00:36:05.776 Removing: /var/run/dpdk/spdk_pid82188
00:36:05.776 Removing: /var/run/dpdk/spdk_pid82920
00:36:05.776 Removing: /var/run/dpdk/spdk_pid83042
00:36:05.776 Removing: /var/run/dpdk/spdk_pid83085
00:36:05.776 Removing: /var/run/dpdk/spdk_pid83149
00:36:05.776 Removing: /var/run/dpdk/spdk_pid83210
00:36:05.776 Removing: /var/run/dpdk/spdk_pid83274
00:36:05.777 Removing: /var/run/dpdk/spdk_pid83458
00:36:05.777 Removing: /var/run/dpdk/spdk_pid83552
00:36:05.777 Removing: /var/run/dpdk/spdk_pid83619
00:36:05.777 Removing: /var/run/dpdk/spdk_pid83675
00:36:05.777 Removing: /var/run/dpdk/spdk_pid83710
00:36:05.777 Removing: /var/run/dpdk/spdk_pid83799
00:36:05.777 Removing: /var/run/dpdk/spdk_pid83962
00:36:05.777 Removing: /var/run/dpdk/spdk_pid84187
00:36:05.777 Removing: /var/run/dpdk/spdk_pid84769
00:36:05.777 Removing: /var/run/dpdk/spdk_pid85417
00:36:05.777 Removing: /var/run/dpdk/spdk_pid86040
00:36:05.777 Removing: /var/run/dpdk/spdk_pid86748
00:36:05.777 Clean
00:36:06.038 08:07:55 -- common/autotest_common.sh@1453 -- # return 0
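The coverage post-processing that follows captures the test-time lcov counters, merges them with the pre-test baseline, and then repeatedly strips third-party and tool-only paths from the total. A condensed bash sketch of that sequence, distilled from the lcov invocations below ($LCOV_OPTS and $OUT are shorthands of mine for the repeated --rc flags and the output directory; they are not variables used by the real script):

    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'   # abbreviated flag set
    OUT=/home/vagrant/spdk_repo/spdk/../output
    # Capture counters from the build tree, tagged with the VM hostname.
    lcov $LCOV_OPTS -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t "$(hostname)" -o "$OUT/cov_test.info"
    # Merge the pre-test baseline with the post-test capture.
    lcov $LCOV_OPTS -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
    # Remove external paths from the total, one pattern at a time
    # (the '/usr/*' pass in the log also adds --ignore-errors unused,unused).
    for glob in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov $LCOV_OPTS -q -r "$OUT/cov_total.info" "$glob" -o "$OUT/cov_total.info"
    done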
00:36:06.038 08:07:55 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:36:06.038 08:07:55 -- common/autotest_common.sh@732 -- # xtrace_disable
00:36:06.038 08:07:55 -- common/autotest_common.sh@10 -- # set +x
00:36:06.038 08:07:55 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:36:06.038 08:07:55 -- common/autotest_common.sh@732 -- # xtrace_disable
00:36:06.038 08:07:55 -- common/autotest_common.sh@10 -- # set +x
00:36:06.038 08:07:55 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:36:06.038 08:07:55 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:36:06.038 08:07:55 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:36:06.038 08:07:55 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:36:06.038 08:07:55 -- spdk/autotest.sh@398 -- # hostname
00:36:06.038 08:07:55 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:36:06.299 geninfo: WARNING: invalid characters removed from testname!
00:36:32.911 08:08:21 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:36:34.297 08:08:24 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:36:36.841 08:08:26 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:36:39.386 08:08:29 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:36:41.290 08:08:31 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:36:43.833 08:08:33 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:36:46.387 08:08:36 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:36:46.387 08:08:36 -- spdk/autorun.sh@1 -- $ timing_finish
00:36:46.387 08:08:36 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:36:46.387 08:08:36 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:36:46.387 08:08:36 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:36:46.387 08:08:36 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:36:46.387 + [[ -n 5040 ]]
00:36:46.387 + sudo kill 5040
00:36:46.397 [Pipeline] }
00:36:46.413 [Pipeline] // timeout
00:36:46.418 [Pipeline] }
00:36:46.465 [Pipeline] // stage
00:36:46.494 [Pipeline] }
00:36:46.511 [Pipeline] // catchError
00:36:46.518 [Pipeline] stage
00:36:46.520 [Pipeline] { (Stop VM)
00:36:46.528 [Pipeline] sh
00:36:46.809 + vagrant halt
00:36:49.357 ==> default: Halting domain...
00:36:55.953 [Pipeline] sh
00:36:56.234 + vagrant destroy -f
00:36:58.783 ==> default: Removing domain...
00:36:59.742 [Pipeline] sh
00:37:00.046 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:37:00.122 [Pipeline] }
00:37:00.136 [Pipeline] // stage
00:37:00.142 [Pipeline] }
00:37:00.157 [Pipeline] // dir
00:37:00.162 [Pipeline] }
00:37:00.178 [Pipeline] // wrap
00:37:00.186 [Pipeline] }
00:37:00.198 [Pipeline] // catchError
00:37:00.207 [Pipeline] stage
00:37:00.209 [Pipeline] { (Epilogue)
00:37:00.222 [Pipeline] sh
00:37:00.508 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:37:05.795 [Pipeline] catchError
00:37:05.797 [Pipeline] {
00:37:05.806 [Pipeline] sh
00:37:06.086 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:37:06.086 Artifacts sizes are good
00:37:06.095 [Pipeline] }
00:37:06.108 [Pipeline] // catchError
00:37:06.120 [Pipeline] archiveArtifacts
00:37:06.127 Archiving artifacts
00:37:06.226 [Pipeline] cleanWs
00:37:06.239 [WS-CLEANUP] Deleting project workspace...
00:37:06.239 [WS-CLEANUP] Deferred wipeout is used...
00:37:06.246 [WS-CLEANUP] done
00:37:06.248 [Pipeline] }
00:37:06.264 [Pipeline] // stage
00:37:06.270 [Pipeline] }
00:37:06.287 [Pipeline] // node
00:37:06.294 [Pipeline] End of Pipeline
00:37:06.330 Finished: SUCCESS